Nov 24 12:44:30 np0005533938 kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 24 12:44:30 np0005533938 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 24 12:44:30 np0005533938 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 12:44:30 np0005533938 kernel: BIOS-provided physical RAM map:
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 12:44:30 np0005533938 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 24 12:44:30 np0005533938 kernel: NX (Execute Disable) protection: active
Nov 24 12:44:30 np0005533938 kernel: APIC: Static calls initialized
Nov 24 12:44:30 np0005533938 kernel: SMBIOS 2.8 present.
Nov 24 12:44:30 np0005533938 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 24 12:44:30 np0005533938 kernel: Hypervisor detected: KVM
Nov 24 12:44:30 np0005533938 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 12:44:30 np0005533938 kernel: kvm-clock: using sched offset of 10733928359 cycles
Nov 24 12:44:30 np0005533938 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 12:44:30 np0005533938 kernel: tsc: Detected 2800.000 MHz processor
Nov 24 12:44:30 np0005533938 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 24 12:44:30 np0005533938 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 12:44:30 np0005533938 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 24 12:44:30 np0005533938 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 24 12:44:30 np0005533938 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 24 12:44:30 np0005533938 kernel: Using GB pages for direct mapping
Nov 24 12:44:30 np0005533938 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 24 12:44:30 np0005533938 kernel: ACPI: Early table checksum verification disabled
Nov 24 12:44:30 np0005533938 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 24 12:44:30 np0005533938 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 12:44:30 np0005533938 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 12:44:30 np0005533938 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 12:44:30 np0005533938 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 24 12:44:30 np0005533938 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 12:44:30 np0005533938 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 24 12:44:30 np0005533938 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 24 12:44:30 np0005533938 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 24 12:44:30 np0005533938 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 24 12:44:30 np0005533938 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 24 12:44:30 np0005533938 kernel: No NUMA configuration found
Nov 24 12:44:30 np0005533938 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 24 12:44:30 np0005533938 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 24 12:44:30 np0005533938 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 24 12:44:30 np0005533938 kernel: Zone ranges:
Nov 24 12:44:30 np0005533938 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 12:44:30 np0005533938 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 12:44:30 np0005533938 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 12:44:30 np0005533938 kernel:  Device   empty
Nov 24 12:44:30 np0005533938 kernel: Movable zone start for each node
Nov 24 12:44:30 np0005533938 kernel: Early memory node ranges
Nov 24 12:44:30 np0005533938 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 12:44:30 np0005533938 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 24 12:44:30 np0005533938 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 24 12:44:30 np0005533938 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 24 12:44:30 np0005533938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 12:44:30 np0005533938 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 12:44:30 np0005533938 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 24 12:44:30 np0005533938 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 12:44:30 np0005533938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 12:44:30 np0005533938 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 12:44:30 np0005533938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 12:44:30 np0005533938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 12:44:30 np0005533938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 12:44:30 np0005533938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 12:44:30 np0005533938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 12:44:30 np0005533938 kernel: TSC deadline timer available
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Max. logical packages:   8
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Max. logical dies:       8
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Max. dies per package:   1
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Max. threads per core:   1
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Num. cores per package:     1
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Num. threads per package:   1
Nov 24 12:44:30 np0005533938 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 24 12:44:30 np0005533938 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 24 12:44:30 np0005533938 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 24 12:44:30 np0005533938 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 24 12:44:30 np0005533938 kernel: Booting paravirtualized kernel on KVM
Nov 24 12:44:30 np0005533938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 12:44:30 np0005533938 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 24 12:44:30 np0005533938 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 24 12:44:30 np0005533938 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 12:44:30 np0005533938 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 12:44:30 np0005533938 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 24 12:44:30 np0005533938 kernel: random: crng init done
Nov 24 12:44:30 np0005533938 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: Fallback order for Node 0: 0 
Nov 24 12:44:30 np0005533938 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 24 12:44:30 np0005533938 kernel: Policy zone: Normal
Nov 24 12:44:30 np0005533938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 12:44:30 np0005533938 kernel: software IO TLB: area num 8.
Nov 24 12:44:30 np0005533938 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 24 12:44:30 np0005533938 kernel: ftrace: allocating 49298 entries in 193 pages
Nov 24 12:44:30 np0005533938 kernel: ftrace: allocated 193 pages with 3 groups
Nov 24 12:44:30 np0005533938 kernel: Dynamic Preempt: voluntary
Nov 24 12:44:30 np0005533938 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 12:44:30 np0005533938 kernel: rcu: 	RCU event tracing is enabled.
Nov 24 12:44:30 np0005533938 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 24 12:44:30 np0005533938 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 24 12:44:30 np0005533938 kernel: 	Rude variant of Tasks RCU enabled.
Nov 24 12:44:30 np0005533938 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 24 12:44:30 np0005533938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 12:44:30 np0005533938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 24 12:44:30 np0005533938 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 12:44:30 np0005533938 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 12:44:30 np0005533938 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 24 12:44:30 np0005533938 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 24 12:44:30 np0005533938 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 12:44:30 np0005533938 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 24 12:44:30 np0005533938 kernel: Console: colour VGA+ 80x25
Nov 24 12:44:30 np0005533938 kernel: printk: console [ttyS0] enabled
Nov 24 12:44:30 np0005533938 kernel: ACPI: Core revision 20230331
Nov 24 12:44:30 np0005533938 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 12:44:30 np0005533938 kernel: x2apic enabled
Nov 24 12:44:30 np0005533938 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 12:44:30 np0005533938 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 24 12:44:30 np0005533938 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 24 12:44:30 np0005533938 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 12:44:30 np0005533938 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 12:44:30 np0005533938 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 12:44:30 np0005533938 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 12:44:30 np0005533938 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 12:44:30 np0005533938 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 12:44:30 np0005533938 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 12:44:30 np0005533938 kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 12:44:30 np0005533938 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 12:44:30 np0005533938 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 12:44:30 np0005533938 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 12:44:30 np0005533938 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 12:44:30 np0005533938 kernel: x86/bugs: return thunk changed
Nov 24 12:44:30 np0005533938 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 12:44:30 np0005533938 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 12:44:30 np0005533938 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 12:44:30 np0005533938 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 12:44:30 np0005533938 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 24 12:44:30 np0005533938 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 12:44:30 np0005533938 kernel: Freeing SMP alternatives memory: 40K
Nov 24 12:44:30 np0005533938 kernel: pid_max: default: 32768 minimum: 301
Nov 24 12:44:30 np0005533938 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 24 12:44:30 np0005533938 kernel: landlock: Up and running.
Nov 24 12:44:30 np0005533938 kernel: Yama: becoming mindful.
Nov 24 12:44:30 np0005533938 kernel: SELinux:  Initializing.
Nov 24 12:44:30 np0005533938 kernel: LSM support for eBPF active
Nov 24 12:44:30 np0005533938 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 12:44:30 np0005533938 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 12:44:30 np0005533938 kernel: ... version:                0
Nov 24 12:44:30 np0005533938 kernel: ... bit width:              48
Nov 24 12:44:30 np0005533938 kernel: ... generic registers:      6
Nov 24 12:44:30 np0005533938 kernel: ... value mask:             0000ffffffffffff
Nov 24 12:44:30 np0005533938 kernel: ... max period:             00007fffffffffff
Nov 24 12:44:30 np0005533938 kernel: ... fixed-purpose events:   0
Nov 24 12:44:30 np0005533938 kernel: ... event mask:             000000000000003f
Nov 24 12:44:30 np0005533938 kernel: signal: max sigframe size: 1776
Nov 24 12:44:30 np0005533938 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 12:44:30 np0005533938 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 24 12:44:30 np0005533938 kernel: smp: Bringing up secondary CPUs ...
Nov 24 12:44:30 np0005533938 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 12:44:30 np0005533938 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 24 12:44:30 np0005533938 kernel: smp: Brought up 1 node, 8 CPUs
Nov 24 12:44:30 np0005533938 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 24 12:44:30 np0005533938 kernel: node 0 deferred pages initialised in 9ms
Nov 24 12:44:30 np0005533938 kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 24 12:44:30 np0005533938 kernel: devtmpfs: initialized
Nov 24 12:44:30 np0005533938 kernel: x86/mm: Memory block size: 128MB
Nov 24 12:44:30 np0005533938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 12:44:30 np0005533938 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 12:44:30 np0005533938 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 24 12:44:30 np0005533938 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 24 12:44:30 np0005533938 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 24 12:44:30 np0005533938 kernel: audit: initializing netlink subsys (disabled)
Nov 24 12:44:30 np0005533938 kernel: audit: type=2000 audit(1764006268.505:1): state=initialized audit_enabled=0 res=1
Nov 24 12:44:30 np0005533938 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 24 12:44:30 np0005533938 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 12:44:30 np0005533938 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 12:44:30 np0005533938 kernel: cpuidle: using governor menu
Nov 24 12:44:30 np0005533938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 12:44:30 np0005533938 kernel: PCI: Using configuration type 1 for base access
Nov 24 12:44:30 np0005533938 kernel: PCI: Using configuration type 1 for extended access
Nov 24 12:44:30 np0005533938 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 12:44:30 np0005533938 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 12:44:30 np0005533938 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 12:44:30 np0005533938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 12:44:30 np0005533938 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 12:44:30 np0005533938 kernel: Demotion targets for Node 0: null
Nov 24 12:44:30 np0005533938 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 12:44:30 np0005533938 kernel: ACPI: Added _OSI(Module Device)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 12:44:30 np0005533938 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 12:44:30 np0005533938 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 24 12:44:30 np0005533938 kernel: ACPI: Interpreter enabled
Nov 24 12:44:30 np0005533938 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 24 12:44:30 np0005533938 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 12:44:30 np0005533938 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 12:44:30 np0005533938 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 12:44:30 np0005533938 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 12:44:30 np0005533938 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [3] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [4] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [5] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [6] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [7] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [8] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [9] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [10] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [11] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [12] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [13] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [14] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [15] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [16] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [17] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [18] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [19] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [20] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [21] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [22] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [23] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [24] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [25] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [26] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [27] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [28] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [29] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [30] registered
Nov 24 12:44:30 np0005533938 kernel: acpiphp: Slot [31] registered
Nov 24 12:44:30 np0005533938 kernel: PCI host bridge to bus 0000:00
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 12:44:30 np0005533938 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 12:44:30 np0005533938 kernel: iommu: Default domain type: Translated
Nov 24 12:44:30 np0005533938 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 12:44:30 np0005533938 kernel: SCSI subsystem initialized
Nov 24 12:44:30 np0005533938 kernel: ACPI: bus type USB registered
Nov 24 12:44:30 np0005533938 kernel: usbcore: registered new interface driver usbfs
Nov 24 12:44:30 np0005533938 kernel: usbcore: registered new interface driver hub
Nov 24 12:44:30 np0005533938 kernel: usbcore: registered new device driver usb
Nov 24 12:44:30 np0005533938 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 24 12:44:30 np0005533938 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 24 12:44:30 np0005533938 kernel: PTP clock support registered
Nov 24 12:44:30 np0005533938 kernel: EDAC MC: Ver: 3.0.0
Nov 24 12:44:30 np0005533938 kernel: NetLabel: Initializing
Nov 24 12:44:30 np0005533938 kernel: NetLabel:  domain hash size = 128
Nov 24 12:44:30 np0005533938 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 24 12:44:30 np0005533938 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 24 12:44:30 np0005533938 kernel: PCI: Using ACPI for IRQ routing
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 12:44:30 np0005533938 kernel: vgaarb: loaded
Nov 24 12:44:30 np0005533938 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 12:44:30 np0005533938 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 12:44:30 np0005533938 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 12:44:30 np0005533938 kernel: pnp: PnP ACPI init
Nov 24 12:44:30 np0005533938 kernel: pnp: PnP ACPI: found 5 devices
Nov 24 12:44:30 np0005533938 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_INET protocol family
Nov 24 12:44:30 np0005533938 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 24 12:44:30 np0005533938 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_XDP protocol family
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 24 12:44:30 np0005533938 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 12:44:30 np0005533938 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 12:44:30 np0005533938 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 84869 usecs
Nov 24 12:44:30 np0005533938 kernel: PCI: CLS 0 bytes, default 64
Nov 24 12:44:30 np0005533938 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 12:44:30 np0005533938 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 24 12:44:30 np0005533938 kernel: ACPI: bus type thunderbolt registered
Nov 24 12:44:30 np0005533938 kernel: Trying to unpack rootfs image as initramfs...
Nov 24 12:44:30 np0005533938 kernel: Initialise system trusted keyrings
Nov 24 12:44:30 np0005533938 kernel: Key type blacklist registered
Nov 24 12:44:30 np0005533938 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 24 12:44:30 np0005533938 kernel: zbud: loaded
Nov 24 12:44:30 np0005533938 kernel: integrity: Platform Keyring initialized
Nov 24 12:44:30 np0005533938 kernel: integrity: Machine keyring initialized
Nov 24 12:44:30 np0005533938 kernel: Freeing initrd memory: 85868K
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_ALG protocol family
Nov 24 12:44:30 np0005533938 kernel: xor: automatically using best checksumming function   avx       
Nov 24 12:44:30 np0005533938 kernel: Key type asymmetric registered
Nov 24 12:44:30 np0005533938 kernel: Asymmetric key parser 'x509' registered
Nov 24 12:44:30 np0005533938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 24 12:44:30 np0005533938 kernel: io scheduler mq-deadline registered
Nov 24 12:44:30 np0005533938 kernel: io scheduler kyber registered
Nov 24 12:44:30 np0005533938 kernel: io scheduler bfq registered
Nov 24 12:44:30 np0005533938 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 24 12:44:30 np0005533938 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 24 12:44:30 np0005533938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 24 12:44:30 np0005533938 kernel: ACPI: button: Power Button [PWRF]
Nov 24 12:44:30 np0005533938 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 12:44:30 np0005533938 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 12:44:30 np0005533938 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 12:44:30 np0005533938 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 12:44:30 np0005533938 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 12:44:30 np0005533938 kernel: Non-volatile memory driver v1.3
Nov 24 12:44:30 np0005533938 kernel: rdac: device handler registered
Nov 24 12:44:30 np0005533938 kernel: hp_sw: device handler registered
Nov 24 12:44:30 np0005533938 kernel: emc: device handler registered
Nov 24 12:44:30 np0005533938 kernel: alua: device handler registered
Nov 24 12:44:30 np0005533938 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 24 12:44:30 np0005533938 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 24 12:44:30 np0005533938 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 24 12:44:30 np0005533938 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 24 12:44:30 np0005533938 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 24 12:44:30 np0005533938 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 24 12:44:30 np0005533938 kernel: usb usb1: Product: UHCI Host Controller
Nov 24 12:44:30 np0005533938 kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 24 12:44:30 np0005533938 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 24 12:44:30 np0005533938 kernel: hub 1-0:1.0: USB hub found
Nov 24 12:44:30 np0005533938 kernel: hub 1-0:1.0: 2 ports detected
Nov 24 12:44:30 np0005533938 kernel: usbcore: registered new interface driver usbserial_generic
Nov 24 12:44:30 np0005533938 kernel: usbserial: USB Serial support registered for generic
Nov 24 12:44:30 np0005533938 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 12:44:30 np0005533938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 12:44:30 np0005533938 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 12:44:30 np0005533938 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 12:44:30 np0005533938 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 12:44:30 np0005533938 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 24 12:44:30 np0005533938 kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 12:44:30 np0005533938 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 24 12:44:30 np0005533938 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T17:44:29 UTC (1764006269)
Nov 24 12:44:30 np0005533938 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 24 12:44:30 np0005533938 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 12:44:30 np0005533938 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 24 12:44:30 np0005533938 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 24 12:44:30 np0005533938 kernel: usbcore: registered new interface driver usbhid
Nov 24 12:44:30 np0005533938 kernel: usbhid: USB HID core driver
Nov 24 12:44:30 np0005533938 kernel: drop_monitor: Initializing network drop monitor service
Nov 24 12:44:30 np0005533938 kernel: Initializing XFRM netlink socket
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_INET6 protocol family
Nov 24 12:44:30 np0005533938 kernel: Segment Routing with IPv6
Nov 24 12:44:30 np0005533938 kernel: NET: Registered PF_PACKET protocol family
Nov 24 12:44:30 np0005533938 kernel: mpls_gso: MPLS GSO support
Nov 24 12:44:30 np0005533938 kernel: IPI shorthand broadcast: enabled
Nov 24 12:44:30 np0005533938 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 24 12:44:30 np0005533938 kernel: AES CTR mode by8 optimization enabled
Nov 24 12:44:30 np0005533938 kernel: sched_clock: Marking stable (1236010790, 149813289)->(1494250839, -108426760)
Nov 24 12:44:30 np0005533938 kernel: registered taskstats version 1
Nov 24 12:44:30 np0005533938 kernel: Loading compiled-in X.509 certificates
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 24 12:44:30 np0005533938 kernel: Demotion targets for Node 0: null
Nov 24 12:44:30 np0005533938 kernel: page_owner is disabled
Nov 24 12:44:30 np0005533938 kernel: Key type .fscrypt registered
Nov 24 12:44:30 np0005533938 kernel: Key type fscrypt-provisioning registered
Nov 24 12:44:30 np0005533938 kernel: Key type big_key registered
Nov 24 12:44:30 np0005533938 kernel: Key type encrypted registered
Nov 24 12:44:30 np0005533938 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 12:44:30 np0005533938 kernel: Loading compiled-in module X.509 certificates
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 24 12:44:30 np0005533938 kernel: ima: Allocated hash algorithm: sha256
Nov 24 12:44:30 np0005533938 kernel: ima: No architecture policies found
Nov 24 12:44:30 np0005533938 kernel: evm: Initialising EVM extended attributes:
Nov 24 12:44:30 np0005533938 kernel: evm: security.selinux
Nov 24 12:44:30 np0005533938 kernel: evm: security.SMACK64 (disabled)
Nov 24 12:44:30 np0005533938 kernel: evm: security.SMACK64EXEC (disabled)
Nov 24 12:44:30 np0005533938 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 24 12:44:30 np0005533938 kernel: evm: security.SMACK64MMAP (disabled)
Nov 24 12:44:30 np0005533938 kernel: evm: security.apparmor (disabled)
Nov 24 12:44:30 np0005533938 kernel: evm: security.ima
Nov 24 12:44:30 np0005533938 kernel: evm: security.capability
Nov 24 12:44:30 np0005533938 kernel: evm: HMAC attrs: 0x1
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 24 12:44:30 np0005533938 kernel: Running certificate verification RSA selftest
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 24 12:44:30 np0005533938 kernel: Running certificate verification ECDSA selftest
Nov 24 12:44:30 np0005533938 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 24 12:44:30 np0005533938 kernel: clk: Disabling unused clocks
Nov 24 12:44:30 np0005533938 kernel: Freeing unused decrypted memory: 2028K
Nov 24 12:44:30 np0005533938 kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 24 12:44:30 np0005533938 kernel: Write protecting the kernel read-only data: 30720k
Nov 24 12:44:30 np0005533938 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 24 12:44:30 np0005533938 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 24 12:44:30 np0005533938 kernel: Run /init as init process
Nov 24 12:44:30 np0005533938 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 12:44:30 np0005533938 systemd: Detected virtualization kvm.
Nov 24 12:44:30 np0005533938 systemd: Detected architecture x86-64.
Nov 24 12:44:30 np0005533938 systemd: Running in initrd.
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: Manufacturer: QEMU
Nov 24 12:44:30 np0005533938 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 24 12:44:30 np0005533938 systemd: No hostname configured, using default hostname.
Nov 24 12:44:30 np0005533938 systemd: Hostname set to <localhost>.
Nov 24 12:44:30 np0005533938 systemd: Initializing machine ID from VM UUID.
Nov 24 12:44:30 np0005533938 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 24 12:44:30 np0005533938 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 24 12:44:30 np0005533938 systemd: Queued start job for default target Initrd Default Target.
Nov 24 12:44:30 np0005533938 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 12:44:30 np0005533938 systemd: Reached target Local Encrypted Volumes.
Nov 24 12:44:30 np0005533938 systemd: Reached target Initrd /usr File System.
Nov 24 12:44:30 np0005533938 systemd: Reached target Local File Systems.
Nov 24 12:44:30 np0005533938 systemd: Reached target Path Units.
Nov 24 12:44:30 np0005533938 systemd: Reached target Slice Units.
Nov 24 12:44:30 np0005533938 systemd: Reached target Swaps.
Nov 24 12:44:30 np0005533938 systemd: Reached target Timer Units.
Nov 24 12:44:30 np0005533938 systemd: Listening on D-Bus System Message Bus Socket.
Nov 24 12:44:30 np0005533938 systemd: Listening on Journal Socket (/dev/log).
Nov 24 12:44:30 np0005533938 systemd: Listening on Journal Socket.
Nov 24 12:44:30 np0005533938 systemd: Listening on udev Control Socket.
Nov 24 12:44:30 np0005533938 systemd: Listening on udev Kernel Socket.
Nov 24 12:44:30 np0005533938 systemd: Reached target Socket Units.
Nov 24 12:44:30 np0005533938 systemd: Starting Create List of Static Device Nodes...
Nov 24 12:44:30 np0005533938 systemd: Starting Journal Service...
Nov 24 12:44:30 np0005533938 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 12:44:30 np0005533938 systemd: Starting Apply Kernel Variables...
Nov 24 12:44:30 np0005533938 systemd: Starting Create System Users...
Nov 24 12:44:30 np0005533938 systemd: Starting Setup Virtual Console...
Nov 24 12:44:30 np0005533938 systemd: Finished Create List of Static Device Nodes.
Nov 24 12:44:30 np0005533938 systemd: Finished Apply Kernel Variables.
Nov 24 12:44:30 np0005533938 systemd: Finished Create System Users.
Nov 24 12:44:30 np0005533938 systemd-journald[309]: Journal started
Nov 24 12:44:30 np0005533938 systemd-journald[309]: Runtime Journal (/run/log/journal/ce8f254e4b984140abc78040b35476ad) is 8.0M, max 153.6M, 145.6M free.
Nov 24 12:44:30 np0005533938 systemd-sysusers[314]: Creating group 'users' with GID 100.
Nov 24 12:44:30 np0005533938 systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Nov 24 12:44:30 np0005533938 systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 24 12:44:30 np0005533938 systemd: Started Journal Service.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 12:44:30 np0005533938 systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 12:44:30 np0005533938 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 12:44:30 np0005533938 systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 12:44:30 np0005533938 systemd[1]: Finished Setup Virtual Console.
Nov 24 12:44:30 np0005533938 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting dracut cmdline hook...
Nov 24 12:44:30 np0005533938 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Nov 24 12:44:30 np0005533938 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 24 12:44:30 np0005533938 systemd[1]: Finished dracut cmdline hook.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting dracut pre-udev hook...
Nov 24 12:44:30 np0005533938 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 12:44:30 np0005533938 kernel: device-mapper: uevent: version 1.0.3
Nov 24 12:44:30 np0005533938 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 24 12:44:30 np0005533938 kernel: RPC: Registered named UNIX socket transport module.
Nov 24 12:44:30 np0005533938 kernel: RPC: Registered udp transport module.
Nov 24 12:44:30 np0005533938 kernel: RPC: Registered tcp transport module.
Nov 24 12:44:30 np0005533938 kernel: RPC: Registered tcp-with-tls transport module.
Nov 24 12:44:30 np0005533938 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 24 12:44:30 np0005533938 rpc.statd[446]: Version 2.5.4 starting
Nov 24 12:44:30 np0005533938 rpc.statd[446]: Initializing NSM state
Nov 24 12:44:30 np0005533938 rpc.idmapd[451]: Setting log level to 0
Nov 24 12:44:30 np0005533938 systemd[1]: Finished dracut pre-udev hook.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 12:44:30 np0005533938 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 12:44:30 np0005533938 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting dracut pre-trigger hook...
Nov 24 12:44:30 np0005533938 systemd[1]: Finished dracut pre-trigger hook.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting Coldplug All udev Devices...
Nov 24 12:44:30 np0005533938 systemd[1]: Created slice Slice /system/modprobe.
Nov 24 12:44:30 np0005533938 systemd[1]: Starting Load Kernel Module configfs...
Nov 24 12:44:30 np0005533938 systemd[1]: Finished Coldplug All udev Devices.
Nov 24 12:44:30 np0005533938 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 12:44:30 np0005533938 systemd[1]: Finished Load Kernel Module configfs.
Nov 24 12:44:30 np0005533938 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 12:44:30 np0005533938 systemd[1]: Reached target Network.
Nov 24 12:44:30 np0005533938 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 24 12:44:30 np0005533938 systemd[1]: Starting dracut initqueue hook...
Nov 24 12:44:30 np0005533938 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 24 12:44:30 np0005533938 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 24 12:44:31 np0005533938 kernel: scsi host0: ata_piix
Nov 24 12:44:31 np0005533938 kernel: scsi host1: ata_piix
Nov 24 12:44:31 np0005533938 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 24 12:44:31 np0005533938 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 24 12:44:31 np0005533938 kernel: vda: vda1
Nov 24 12:44:31 np0005533938 systemd[1]: Mounting Kernel Configuration File System...
Nov 24 12:44:31 np0005533938 systemd[1]: Mounted Kernel Configuration File System.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target System Initialization.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target Basic System.
Nov 24 12:44:31 np0005533938 kernel: ata1: found unknown device (class 0)
Nov 24 12:44:31 np0005533938 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 24 12:44:31 np0005533938 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 24 12:44:31 np0005533938 systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 12:44:31 np0005533938 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 24 12:44:31 np0005533938 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 24 12:44:31 np0005533938 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 24 12:44:31 np0005533938 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target Initrd Root Device.
Nov 24 12:44:31 np0005533938 systemd[1]: Finished dracut initqueue hook.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 24 12:44:31 np0005533938 systemd[1]: Reached target Remote File Systems.
Nov 24 12:44:31 np0005533938 systemd[1]: Starting dracut pre-mount hook...
Nov 24 12:44:31 np0005533938 systemd[1]: Finished dracut pre-mount hook.
Nov 24 12:44:31 np0005533938 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 24 12:44:31 np0005533938 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 24 12:44:31 np0005533938 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 24 12:44:31 np0005533938 systemd[1]: Mounting /sysroot...
Nov 24 12:44:32 np0005533938 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 24 12:44:32 np0005533938 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 24 12:44:32 np0005533938 kernel: XFS (vda1): Ending clean mount
Nov 24 12:44:32 np0005533938 systemd[1]: Mounted /sysroot.
Nov 24 12:44:32 np0005533938 systemd[1]: Reached target Initrd Root File System.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 24 12:44:32 np0005533938 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 24 12:44:32 np0005533938 systemd[1]: Reached target Initrd File Systems.
Nov 24 12:44:32 np0005533938 systemd[1]: Reached target Initrd Default Target.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting dracut mount hook...
Nov 24 12:44:32 np0005533938 systemd[1]: Finished dracut mount hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 24 12:44:32 np0005533938 rpc.idmapd[451]: exiting on signal 15
Nov 24 12:44:32 np0005533938 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Network.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Timer Units.
Nov 24 12:44:32 np0005533938 systemd[1]: dbus.socket: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Initrd Default Target.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Basic System.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Initrd Root Device.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Initrd /usr File System.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Path Units.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Remote File Systems.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Slice Units.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Socket Units.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target System Initialization.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Local File Systems.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Swaps.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut mount hook.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut pre-mount hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut initqueue hook.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Coldplug All udev Devices.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut pre-trigger hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Setup Virtual Console.
Nov 24 12:44:32 np0005533938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Closed udev Control Socket.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Closed udev Kernel Socket.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut pre-udev hook.
Nov 24 12:44:32 np0005533938 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped dracut cmdline hook.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting Cleanup udev Database...
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 24 12:44:32 np0005533938 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 24 12:44:32 np0005533938 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Stopped Create System Users.
Nov 24 12:44:32 np0005533938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 12:44:32 np0005533938 systemd[1]: Finished Cleanup udev Database.
Nov 24 12:44:32 np0005533938 systemd[1]: Reached target Switch Root.
Nov 24 12:44:32 np0005533938 systemd[1]: Starting Switch Root...
Nov 24 12:44:32 np0005533938 systemd[1]: Switching root.
Nov 24 12:44:32 np0005533938 systemd-journald[309]: Journal stopped
Nov 24 12:44:35 np0005533938 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 24 12:44:35 np0005533938 kernel: audit: type=1404 audit(1764006273.241:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 12:44:35 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 12:44:35 np0005533938 kernel: audit: type=1403 audit(1764006273.427:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 12:44:35 np0005533938 systemd: Successfully loaded SELinux policy in 192.690ms.
Nov 24 12:44:35 np0005533938 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.774ms.
Nov 24 12:44:35 np0005533938 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 24 12:44:35 np0005533938 systemd: Detected virtualization kvm.
Nov 24 12:44:35 np0005533938 systemd: Detected architecture x86-64.
Nov 24 12:44:35 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 12:44:35 np0005533938 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd: Stopped Switch Root.
Nov 24 12:44:35 np0005533938 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 12:44:35 np0005533938 systemd: Created slice Slice /system/getty.
Nov 24 12:44:35 np0005533938 systemd: Created slice Slice /system/serial-getty.
Nov 24 12:44:35 np0005533938 systemd: Created slice Slice /system/sshd-keygen.
Nov 24 12:44:35 np0005533938 systemd: Created slice User and Session Slice.
Nov 24 12:44:35 np0005533938 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 24 12:44:35 np0005533938 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 24 12:44:35 np0005533938 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 24 12:44:35 np0005533938 systemd: Reached target Local Encrypted Volumes.
Nov 24 12:44:35 np0005533938 systemd: Stopped target Switch Root.
Nov 24 12:44:35 np0005533938 systemd: Stopped target Initrd File Systems.
Nov 24 12:44:35 np0005533938 systemd: Stopped target Initrd Root File System.
Nov 24 12:44:35 np0005533938 systemd: Reached target Local Integrity Protected Volumes.
Nov 24 12:44:35 np0005533938 systemd: Reached target Path Units.
Nov 24 12:44:35 np0005533938 systemd: Reached target rpc_pipefs.target.
Nov 24 12:44:35 np0005533938 systemd: Reached target Slice Units.
Nov 24 12:44:35 np0005533938 systemd: Reached target Swaps.
Nov 24 12:44:35 np0005533938 systemd: Reached target Local Verity Protected Volumes.
Nov 24 12:44:35 np0005533938 systemd: Listening on RPCbind Server Activation Socket.
Nov 24 12:44:35 np0005533938 systemd: Reached target RPC Port Mapper.
Nov 24 12:44:35 np0005533938 systemd: Listening on Process Core Dump Socket.
Nov 24 12:44:35 np0005533938 systemd: Listening on initctl Compatibility Named Pipe.
Nov 24 12:44:35 np0005533938 systemd: Listening on udev Control Socket.
Nov 24 12:44:35 np0005533938 systemd: Listening on udev Kernel Socket.
Nov 24 12:44:35 np0005533938 systemd: Mounting Huge Pages File System...
Nov 24 12:44:35 np0005533938 systemd: Mounting POSIX Message Queue File System...
Nov 24 12:44:35 np0005533938 systemd: Mounting Kernel Debug File System...
Nov 24 12:44:35 np0005533938 systemd: Mounting Kernel Trace File System...
Nov 24 12:44:35 np0005533938 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 12:44:35 np0005533938 systemd: Starting Create List of Static Device Nodes...
Nov 24 12:44:35 np0005533938 systemd: Starting Load Kernel Module configfs...
Nov 24 12:44:35 np0005533938 systemd: Starting Load Kernel Module drm...
Nov 24 12:44:35 np0005533938 systemd: Starting Load Kernel Module efi_pstore...
Nov 24 12:44:35 np0005533938 systemd: Starting Load Kernel Module fuse...
Nov 24 12:44:35 np0005533938 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 24 12:44:35 np0005533938 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd: Stopped File System Check on Root Device.
Nov 24 12:44:35 np0005533938 systemd: Stopped Journal Service.
Nov 24 12:44:35 np0005533938 kernel: fuse: init (API version 7.37)
Nov 24 12:44:35 np0005533938 systemd: Starting Journal Service...
Nov 24 12:44:35 np0005533938 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 24 12:44:35 np0005533938 systemd: Starting Generate network units from Kernel command line...
Nov 24 12:44:35 np0005533938 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 12:44:35 np0005533938 systemd: Starting Remount Root and Kernel File Systems...
Nov 24 12:44:35 np0005533938 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 12:44:35 np0005533938 systemd: Starting Apply Kernel Variables...
Nov 24 12:44:35 np0005533938 systemd: Starting Coldplug All udev Devices...
Nov 24 12:44:35 np0005533938 systemd: Mounted Huge Pages File System.
Nov 24 12:44:35 np0005533938 systemd: Mounted POSIX Message Queue File System.
Nov 24 12:44:35 np0005533938 systemd: Mounted Kernel Debug File System.
Nov 24 12:44:35 np0005533938 systemd: Mounted Kernel Trace File System.
Nov 24 12:44:35 np0005533938 systemd: Finished Create List of Static Device Nodes.
Nov 24 12:44:35 np0005533938 kernel: ACPI: bus type drm_connector registered
Nov 24 12:44:35 np0005533938 systemd: modprobe@configfs.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd: Finished Load Kernel Module configfs.
Nov 24 12:44:35 np0005533938 systemd: modprobe@drm.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd: Finished Load Kernel Module drm.
Nov 24 12:44:35 np0005533938 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 24 12:44:35 np0005533938 systemd-journald[677]: Journal started
Nov 24 12:44:35 np0005533938 systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 12:44:35 np0005533938 systemd[1]: Queued start job for default target Multi-User System.
Nov 24 12:44:35 np0005533938 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd: Started Journal Service.
Nov 24 12:44:35 np0005533938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 24 12:44:35 np0005533938 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Load Kernel Module fuse.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Generate network units from Kernel command line.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 24 12:44:35 np0005533938 systemd[1]: Mounting FUSE Control File System...
Nov 24 12:44:35 np0005533938 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 12:44:35 np0005533938 systemd[1]: Starting Rebuild Hardware Database...
Nov 24 12:44:35 np0005533938 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 24 12:44:35 np0005533938 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 12:44:35 np0005533938 systemd[1]: Starting Load/Save OS Random Seed...
Nov 24 12:44:35 np0005533938 systemd[1]: Starting Create System Users...
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Apply Kernel Variables.
Nov 24 12:44:35 np0005533938 systemd[1]: Mounted FUSE Control File System.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Coldplug All udev Devices.
Nov 24 12:44:35 np0005533938 systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 24 12:44:35 np0005533938 systemd-journald[677]: Received client request to flush runtime journal.
Nov 24 12:44:35 np0005533938 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Create System Users.
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Load/Save OS Random Seed.
Nov 24 12:44:36 np0005533938 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 24 12:44:36 np0005533938 systemd[1]: Reached target Preparation for Local File Systems.
Nov 24 12:44:36 np0005533938 systemd[1]: Reached target Local File Systems.
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 24 12:44:36 np0005533938 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 24 12:44:36 np0005533938 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 12:44:36 np0005533938 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Automatic Boot Loader Update...
Nov 24 12:44:36 np0005533938 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Create Volatile Files and Directories...
Nov 24 12:44:36 np0005533938 bootctl[695]: Couldn't find EFI system partition, skipping.
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Automatic Boot Loader Update.
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Create Volatile Files and Directories.
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Security Auditing Service...
Nov 24 12:44:36 np0005533938 systemd[1]: Starting RPC Bind...
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Rebuild Journal Catalog...
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Rebuild Journal Catalog.
Nov 24 12:44:36 np0005533938 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 24 12:44:36 np0005533938 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 24 12:44:36 np0005533938 systemd[1]: Started RPC Bind.
Nov 24 12:44:36 np0005533938 augenrules[706]: /sbin/augenrules: No change
Nov 24 12:44:36 np0005533938 augenrules[721]: No rules
Nov 24 12:44:36 np0005533938 augenrules[721]: enabled 1
Nov 24 12:44:36 np0005533938 augenrules[721]: failure 1
Nov 24 12:44:36 np0005533938 augenrules[721]: pid 701
Nov 24 12:44:36 np0005533938 augenrules[721]: rate_limit 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_limit 8192
Nov 24 12:44:36 np0005533938 augenrules[721]: lost 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog 3
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time 60000
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time_actual 0
Nov 24 12:44:36 np0005533938 augenrules[721]: enabled 1
Nov 24 12:44:36 np0005533938 augenrules[721]: failure 1
Nov 24 12:44:36 np0005533938 augenrules[721]: pid 701
Nov 24 12:44:36 np0005533938 augenrules[721]: rate_limit 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_limit 8192
Nov 24 12:44:36 np0005533938 augenrules[721]: lost 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time 60000
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time_actual 0
Nov 24 12:44:36 np0005533938 augenrules[721]: enabled 1
Nov 24 12:44:36 np0005533938 augenrules[721]: failure 1
Nov 24 12:44:36 np0005533938 augenrules[721]: pid 701
Nov 24 12:44:36 np0005533938 augenrules[721]: rate_limit 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_limit 8192
Nov 24 12:44:36 np0005533938 augenrules[721]: lost 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog 0
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time 60000
Nov 24 12:44:36 np0005533938 augenrules[721]: backlog_wait_time_actual 0
Nov 24 12:44:36 np0005533938 systemd[1]: Started Security Auditing Service.
Nov 24 12:44:36 np0005533938 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 24 12:44:36 np0005533938 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 24 12:44:37 np0005533938 systemd[1]: Finished Rebuild Hardware Database.
Nov 24 12:44:37 np0005533938 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 24 12:44:37 np0005533938 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 24 12:44:37 np0005533938 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 24 12:44:37 np0005533938 systemd[1]: Starting Load Kernel Module configfs...
Nov 24 12:44:37 np0005533938 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 12:44:37 np0005533938 systemd[1]: Finished Load Kernel Module configfs.
Nov 24 12:44:37 np0005533938 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 24 12:44:37 np0005533938 systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 12:44:37 np0005533938 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 24 12:44:37 np0005533938 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 12:44:37 np0005533938 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 24 12:44:37 np0005533938 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 12:44:37 np0005533938 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 12:44:37 np0005533938 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 12:44:37 np0005533938 kernel: Console: switching to colour dummy device 80x25
Nov 24 12:44:37 np0005533938 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 12:44:37 np0005533938 kernel: [drm] features: -context_init
Nov 24 12:44:37 np0005533938 kernel: [drm] number of scanouts: 1
Nov 24 12:44:37 np0005533938 kernel: [drm] number of cap sets: 0
Nov 24 12:44:37 np0005533938 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 12:44:37 np0005533938 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 12:44:37 np0005533938 kernel: Console: switching to colour frame buffer device 128x48
Nov 24 12:44:37 np0005533938 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 12:44:37 np0005533938 kernel: kvm_amd: TSC scaling supported
Nov 24 12:44:37 np0005533938 kernel: kvm_amd: Nested Virtualization enabled
Nov 24 12:44:37 np0005533938 kernel: kvm_amd: Nested Paging enabled
Nov 24 12:44:37 np0005533938 kernel: kvm_amd: LBR virtualization supported
Nov 24 12:44:38 np0005533938 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 24 12:44:38 np0005533938 systemd[1]: Starting Update is Completed...
Nov 24 12:44:38 np0005533938 systemd[1]: Finished Update is Completed.
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target System Initialization.
Nov 24 12:44:38 np0005533938 systemd[1]: Started dnf makecache --timer.
Nov 24 12:44:38 np0005533938 systemd[1]: Started Daily rotation of log files.
Nov 24 12:44:38 np0005533938 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target Timer Units.
Nov 24 12:44:38 np0005533938 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 24 12:44:38 np0005533938 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target Socket Units.
Nov 24 12:44:38 np0005533938 systemd[1]: Starting D-Bus System Message Bus...
Nov 24 12:44:38 np0005533938 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 12:44:38 np0005533938 systemd[1]: Started D-Bus System Message Bus.
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target Basic System.
Nov 24 12:44:38 np0005533938 dbus-broker-lau[812]: Ready
Nov 24 12:44:38 np0005533938 systemd[1]: Starting NTP client/server...
Nov 24 12:44:38 np0005533938 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 24 12:44:38 np0005533938 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 24 12:44:38 np0005533938 systemd[1]: Starting IPv4 firewall with iptables...
Nov 24 12:44:38 np0005533938 systemd[1]: Started irqbalance daemon.
Nov 24 12:44:38 np0005533938 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 24 12:44:38 np0005533938 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 12:44:38 np0005533938 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 12:44:38 np0005533938 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target sshd-keygen.target.
Nov 24 12:44:38 np0005533938 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 24 12:44:38 np0005533938 systemd[1]: Reached target User and Group Name Lookups.
Nov 24 12:44:38 np0005533938 systemd[1]: Starting User Login Management...
Nov 24 12:44:38 np0005533938 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 24 12:44:39 np0005533938 systemd-logind[822]: New seat seat0.
Nov 24 12:44:39 np0005533938 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 12:44:39 np0005533938 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 12:44:39 np0005533938 systemd[1]: Started User Login Management.
Nov 24 12:44:39 np0005533938 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 24 12:44:39 np0005533938 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 24 12:44:39 np0005533938 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 12:44:39 np0005533938 chronyd[831]: Loaded 0 symmetric keys
Nov 24 12:44:39 np0005533938 chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 24 12:44:39 np0005533938 chronyd[831]: Loaded seccomp filter (level 2)
Nov 24 12:44:39 np0005533938 systemd[1]: Started NTP client/server.
Nov 24 12:44:39 np0005533938 iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 24 12:44:39 np0005533938 systemd[1]: Finished IPv4 firewall with iptables.
Nov 24 12:44:41 np0005533938 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 24 Nov 2025 17:44:41 +0000. Up 13.16 seconds.
Nov 24 12:44:42 np0005533938 systemd[1]: run-cloud\x2dinit-tmp-tmpasocghyj.mount: Deactivated successfully.
Nov 24 12:44:42 np0005533938 systemd[1]: Starting Hostname Service...
Nov 24 12:44:42 np0005533938 systemd[1]: Started Hostname Service.
Nov 24 12:44:42 np0005533938 systemd-hostnamed[856]: Hostname set to <np0005533938.novalocal> (static)
Nov 24 12:44:42 np0005533938 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 24 12:44:42 np0005533938 systemd[1]: Reached target Preparation for Network.
Nov 24 12:44:42 np0005533938 systemd[1]: Starting Network Manager...
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.7381] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.7387] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8583] manager[0x55cce271d080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8653] hostname: hostname: using hostnamed
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8654] hostname: static hostname changed from (none) to "np0005533938.novalocal"
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8662] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8831] manager[0x55cce271d080]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.8832] manager[0x55cce271d080]: rfkill: WWAN hardware radio set enabled
Nov 24 12:44:42 np0005533938 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9087] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9089] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9090] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9090] manager: Networking is enabled by state file
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9093] settings: Loaded settings plugin: keyfile (internal)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9266] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9355] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9387] dhcp: init: Using DHCP client 'internal'
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9390] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9404] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 12:44:42 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9530] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9538] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9547] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9550] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 12:44:42 np0005533938 systemd[1]: Started Network Manager.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9585] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9589] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9591] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9592] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9594] device (eth0): carrier: link connected
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9595] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9601] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 12:44:42 np0005533938 systemd[1]: Reached target Network.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9632] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9637] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9639] manager: NetworkManager state is now CONNECTING
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9640] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 systemd[1]: Starting Network Manager Wait Online...
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9647] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9650] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:44:42 np0005533938 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9707] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9716] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 12:44:42 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9736] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9925] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9927] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9934] device (lo): Activation: successful, device activated.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9942] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9943] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9945] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9948] device (eth0): Activation: successful, device activated.
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9953] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 12:44:42 np0005533938 NetworkManager[860]: <info>  [1764006282.9956] manager: startup complete
Nov 24 12:44:43 np0005533938 systemd[1]: Finished Network Manager Wait Online.
Nov 24 12:44:43 np0005533938 systemd[1]: Starting Cloud-init: Network Stage...
Nov 24 12:44:43 np0005533938 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 24 12:44:43 np0005533938 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 24 12:44:43 np0005533938 systemd[1]: Reached target NFS client services.
Nov 24 12:44:43 np0005533938 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 24 12:44:43 np0005533938 systemd[1]: Reached target Remote File Systems.
Nov 24 12:44:43 np0005533938 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 24 12:44:43 np0005533938 cloud-init[925]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 24 Nov 2025 17:44:43 +0000. Up 15.02 seconds.
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |  eth0  | True |         38.102.83.27         | 255.255.255.0 | global | fa:16:3e:11:88:c1 |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |  eth0  | True | fe80::f816:3eff:fe11:88c1/64 |       .       |  link  | fa:16:3e:11:88:c1 |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 24 12:44:43 np0005533938 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 24 12:44:45 np0005533938 cloud-init[925]: Generating public/private rsa key pair.
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key fingerprint is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: SHA256:hcDAAuEIqGKAH+TbqBy+2fA7fJpWdtXnlt54/h+BmsY root@np0005533938.novalocal
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key's randomart image is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: +---[RSA 3072]----+
Nov 24 12:44:45 np0005533938 cloud-init[925]: |*+...o.          |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |B.o . .. .       |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |+o.o    . o      |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |o..+     o . ..  |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |o.o .   S   o... |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |o..  o .  . o+  .|
Nov 24 12:44:45 np0005533938 cloud-init[925]: |.+. o .    Eo o. |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |  *+..    .  o o.|
Nov 24 12:44:45 np0005533938 cloud-init[925]: | o.*=         o.=|
Nov 24 12:44:45 np0005533938 cloud-init[925]: +----[SHA256]-----+
Nov 24 12:44:45 np0005533938 cloud-init[925]: Generating public/private ecdsa key pair.
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key fingerprint is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: SHA256:CDacZeZsUsI2oBwTd1KvTiEBFaeFP4vCkB2znje57Jg root@np0005533938.novalocal
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key's randomart image is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: +---[ECDSA 256]---+
Nov 24 12:44:45 np0005533938 cloud-init[925]: | ==O===          |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |..*o@X.          |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |.+ BOo+.         |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |o o..Bo.         |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |o. ..++ S        |
Nov 24 12:44:45 np0005533938 cloud-init[925]: | oo.*.           |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |  .o +           |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |   oo            |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |  E..            |
Nov 24 12:44:45 np0005533938 cloud-init[925]: +----[SHA256]-----+
Nov 24 12:44:45 np0005533938 cloud-init[925]: Generating public/private ed25519 key pair.
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 24 12:44:45 np0005533938 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key fingerprint is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: SHA256:NDcN/s6nHgDI7xNQdcH0y1ZOLCOYBiEgE81oGTY+ch8 root@np0005533938.novalocal
Nov 24 12:44:45 np0005533938 cloud-init[925]: The key's randomart image is:
Nov 24 12:44:45 np0005533938 cloud-init[925]: +--[ED25519 256]--+
Nov 24 12:44:45 np0005533938 cloud-init[925]: |  BB... +ooo+.   |
Nov 24 12:44:45 np0005533938 cloud-init[925]: | o++o. + o *.. . |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |..+ E + + B o + +|
Nov 24 12:44:45 np0005533938 cloud-init[925]: | o o . + = o o B |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |    .   S . . + .|
Nov 24 12:44:45 np0005533938 cloud-init[925]: |       . . + .   |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |        o   + .  |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |         .   +   |
Nov 24 12:44:45 np0005533938 cloud-init[925]: |           .o    |
Nov 24 12:44:45 np0005533938 cloud-init[925]: +----[SHA256]-----+
Nov 24 12:44:45 np0005533938 systemd[1]: Finished Cloud-init: Network Stage.
Nov 24 12:44:45 np0005533938 systemd[1]: Reached target Cloud-config availability.
Nov 24 12:44:45 np0005533938 systemd[1]: Reached target Network is Online.
Nov 24 12:44:45 np0005533938 systemd[1]: Starting Cloud-init: Config Stage...
Nov 24 12:44:45 np0005533938 systemd[1]: Starting Crash recovery kernel arming...
Nov 24 12:44:45 np0005533938 systemd[1]: Starting Notify NFS peers of a restart...
Nov 24 12:44:45 np0005533938 systemd[1]: Starting System Logging Service...
Nov 24 12:44:45 np0005533938 systemd[1]: Starting OpenSSH server daemon...
Nov 24 12:44:45 np0005533938 sm-notify[1007]: Version 2.5.4 starting
Nov 24 12:44:45 np0005533938 systemd[1]: Starting Permit User Sessions...
Nov 24 12:44:45 np0005533938 systemd[1]: Started Notify NFS peers of a restart.
Nov 24 12:44:45 np0005533938 systemd[1]: Started OpenSSH server daemon.
Nov 24 12:44:45 np0005533938 systemd[1]: Finished Permit User Sessions.
Nov 24 12:44:45 np0005533938 systemd[1]: Started Command Scheduler.
Nov 24 12:44:45 np0005533938 systemd[1]: Started Getty on tty1.
Nov 24 12:44:45 np0005533938 systemd[1]: Started Serial Getty on ttyS0.
Nov 24 12:44:45 np0005533938 systemd[1]: Reached target Login Prompts.
Nov 24 12:44:45 np0005533938 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Nov 24 12:44:45 np0005533938 systemd[1]: Started System Logging Service.
Nov 24 12:44:45 np0005533938 rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 24 12:44:45 np0005533938 systemd[1]: Reached target Multi-User System.
Nov 24 12:44:45 np0005533938 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 24 12:44:45 np0005533938 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 24 12:44:45 np0005533938 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 24 12:44:45 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 12:44:45 np0005533938 kdumpctl[1021]: kdump: No kdump initial ramdisk found.
Nov 24 12:44:45 np0005533938 kdumpctl[1021]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 24 12:44:45 np0005533938 cloud-init[1135]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 24 Nov 2025 17:44:45 +0000. Up 17.57 seconds.
Nov 24 12:44:46 np0005533938 systemd[1]: Finished Cloud-init: Config Stage.
Nov 24 12:44:46 np0005533938 systemd[1]: Starting Cloud-init: Final Stage...
Nov 24 12:44:46 np0005533938 dracut[1285]: dracut-057-102.git20250818.el9
Nov 24 12:44:46 np0005533938 cloud-init[1306]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 24 Nov 2025 17:44:46 +0000. Up 18.02 seconds.
Nov 24 12:44:46 np0005533938 dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 24 12:44:46 np0005533938 cloud-init[1336]: #############################################################
Nov 24 12:44:46 np0005533938 cloud-init[1339]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 24 12:44:46 np0005533938 cloud-init[1347]: 256 SHA256:CDacZeZsUsI2oBwTd1KvTiEBFaeFP4vCkB2znje57Jg root@np0005533938.novalocal (ECDSA)
Nov 24 12:44:46 np0005533938 cloud-init[1356]: 256 SHA256:NDcN/s6nHgDI7xNQdcH0y1ZOLCOYBiEgE81oGTY+ch8 root@np0005533938.novalocal (ED25519)
Nov 24 12:44:46 np0005533938 cloud-init[1363]: 3072 SHA256:hcDAAuEIqGKAH+TbqBy+2fA7fJpWdtXnlt54/h+BmsY root@np0005533938.novalocal (RSA)
Nov 24 12:44:46 np0005533938 cloud-init[1365]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 24 12:44:46 np0005533938 cloud-init[1367]: #############################################################
Nov 24 12:44:46 np0005533938 cloud-init[1306]: Cloud-init v. 24.4-7.el9 finished at Mon, 24 Nov 2025 17:44:46 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 18.25 seconds
Nov 24 12:44:46 np0005533938 systemd[1]: Finished Cloud-init: Final Stage.
Nov 24 12:44:46 np0005533938 systemd[1]: Reached target Cloud-init target.
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 12:44:47 np0005533938 chronyd[831]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Nov 24 12:44:47 np0005533938 chronyd[831]: System clock TAI offset set to 37 seconds
Nov 24 12:44:47 np0005533938 dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: memstrack is not available
Nov 24 12:44:48 np0005533938 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 24 12:44:48 np0005533938 dracut[1288]: memstrack is not available
Nov 24 12:44:48 np0005533938 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 24 12:44:48 np0005533938 dracut[1288]: *** Including module: systemd ***
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: fips ***
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 24 12:44:49 np0005533938 irqbalance[818]: IRQ 29 affinity is now unmanaged
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: systemd-initrd ***
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: i18n ***
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: drm ***
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: prefixdevname ***
Nov 24 12:44:49 np0005533938 dracut[1288]: *** Including module: kernel-modules ***
Nov 24 12:44:50 np0005533938 kernel: block vda: the capability attribute has been deprecated.
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: kernel-modules-extra ***
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: qemu ***
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: fstab-sys ***
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: rootfs-block ***
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: terminfo ***
Nov 24 12:44:50 np0005533938 dracut[1288]: *** Including module: udev-rules ***
Nov 24 12:44:51 np0005533938 dracut[1288]: Skipping udev rule: 91-permissions.rules
Nov 24 12:44:51 np0005533938 dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: virtiofs ***
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: dracut-systemd ***
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: usrmount ***
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: base ***
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: fs-lib ***
Nov 24 12:44:51 np0005533938 dracut[1288]: *** Including module: kdumpbase ***
Nov 24 12:44:52 np0005533938 dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 24 12:44:52 np0005533938 dracut[1288]:  microcode_ctl module: mangling fw_dir
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 24 12:44:52 np0005533938 dracut[1288]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 24 12:44:52 np0005533938 dracut[1288]: *** Including module: openssl ***
Nov 24 12:44:52 np0005533938 dracut[1288]: *** Including module: shutdown ***
Nov 24 12:44:52 np0005533938 dracut[1288]: *** Including module: squash ***
Nov 24 12:44:53 np0005533938 dracut[1288]: *** Including modules done ***
Nov 24 12:44:53 np0005533938 dracut[1288]: *** Installing kernel module dependencies ***
Nov 24 12:44:53 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 12:44:53 np0005533938 dracut[1288]: *** Installing kernel module dependencies done ***
Nov 24 12:44:54 np0005533938 dracut[1288]: *** Resolving executable dependencies ***
Nov 24 12:44:56 np0005533938 dracut[1288]: *** Resolving executable dependencies done ***
Nov 24 12:44:56 np0005533938 dracut[1288]: *** Generating early-microcode cpio image ***
Nov 24 12:44:56 np0005533938 dracut[1288]: *** Store current command line parameters ***
Nov 24 12:44:56 np0005533938 dracut[1288]: Stored kernel commandline:
Nov 24 12:44:56 np0005533938 dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Nov 24 12:44:56 np0005533938 dracut[1288]: *** Install squash loader ***
Nov 24 12:44:57 np0005533938 dracut[1288]: *** Squashing the files inside the initramfs ***
Nov 24 12:44:58 np0005533938 dracut[1288]: *** Squashing the files inside the initramfs done ***
Nov 24 12:44:58 np0005533938 dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 24 12:44:58 np0005533938 dracut[1288]: *** Hardlinking files ***
Nov 24 12:44:58 np0005533938 dracut[1288]: *** Hardlinking files done ***
Nov 24 12:44:59 np0005533938 dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 24 12:44:59 np0005533938 kdumpctl[1021]: kdump: kexec: loaded kdump kernel
Nov 24 12:44:59 np0005533938 kdumpctl[1021]: kdump: Starting kdump: [OK]
Nov 24 12:44:59 np0005533938 systemd[1]: Finished Crash recovery kernel arming.
Nov 24 12:44:59 np0005533938 systemd[1]: Startup finished in 1.587s (kernel) + 3.303s (initrd) + 26.491s (userspace) = 31.382s.
Nov 24 12:45:01 np0005533938 systemd[1]: Created slice User Slice of UID 1000.
Nov 24 12:45:01 np0005533938 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 24 12:45:01 np0005533938 systemd-logind[822]: New session 1 of user zuul.
Nov 24 12:45:01 np0005533938 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 24 12:45:01 np0005533938 systemd[1]: Starting User Manager for UID 1000...
Nov 24 12:45:01 np0005533938 systemd[4302]: Queued start job for default target Main User Target.
Nov 24 12:45:01 np0005533938 systemd[4302]: Created slice User Application Slice.
Nov 24 12:45:01 np0005533938 systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 12:45:01 np0005533938 systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 12:45:01 np0005533938 systemd[4302]: Reached target Paths.
Nov 24 12:45:01 np0005533938 systemd[4302]: Reached target Timers.
Nov 24 12:45:01 np0005533938 systemd[4302]: Starting D-Bus User Message Bus Socket...
Nov 24 12:45:01 np0005533938 systemd[4302]: Starting Create User's Volatile Files and Directories...
Nov 24 12:45:01 np0005533938 systemd[4302]: Finished Create User's Volatile Files and Directories.
Nov 24 12:45:01 np0005533938 systemd[4302]: Listening on D-Bus User Message Bus Socket.
Nov 24 12:45:01 np0005533938 systemd[4302]: Reached target Sockets.
Nov 24 12:45:01 np0005533938 systemd[4302]: Reached target Basic System.
Nov 24 12:45:01 np0005533938 systemd[4302]: Reached target Main User Target.
Nov 24 12:45:01 np0005533938 systemd[4302]: Startup finished in 141ms.
Nov 24 12:45:01 np0005533938 systemd[1]: Started User Manager for UID 1000.
Nov 24 12:45:01 np0005533938 systemd[1]: Started Session 1 of User zuul.
Nov 24 12:45:01 np0005533938 python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 12:45:06 np0005533938 python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 12:45:12 np0005533938 python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 12:45:12 np0005533938 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 12:45:13 np0005533938 python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 24 12:45:15 np0005533938 python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvZPrKgB89mfwS2oik8tzBHyaRlPXyTumbN2XjYTIM9I73V/FHQIy+XgadtbvJmQdYv8gh5HHJ/ClxLAoQ9aQF+mvRKNNs1jSgMJUqsMhPN6puT4ggC46WGm2cz7KmzKpsB0ShzjCEx+MnmeM3wyA9Qhj49wWd31woFFaZ0yOVerGO1NVQlk/OPG/73EZkgrw/yGDomLqV0TCVSy3AhPNg5NtRbQiteODSSbZVl1auSX9PwM/eoz9P0tZMrIFOrXEd1QpVvERhc48M4e8edGTP8GQI4cSCyvKKG53gcEcBzpMbfnQtx4DKICDQxx6CHUC08XioN/xg1GDke+lh7jFrHL37m3oI2k55is36NYx0S3pSY+f6DLn6SiNGX8TaDALHvruYJmRuLFKa/olWFbLiJzfBaW9cpTWGHEkDqqpm7EbWP7Dy8VYQf2ziK+vtM8QLvT7ulXgdFRF9k5sR0YY7NIMeo/48c+v/ONPoP9lLYshkXYLCbff8PwGxgkN39aM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:16 np0005533938 python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:18 np0005533938 python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:18 np0005533938 python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006317.931667-207-236949643964642/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ff74cb70154b44fbadb80a19812dfd3c_id_rsa follow=False checksum=b939cc582dbc8d0d1a8ac7a9137f32beb4d349b2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:19 np0005533938 python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:19 np0005533938 python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006318.8711755-240-33848187972576/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ff74cb70154b44fbadb80a19812dfd3c_id_rsa.pub follow=False checksum=fbea10b760c4c20f6233311d514933ea718dc471 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:20 np0005533938 python3[4975]: ansible-ping Invoked with data=pong
Nov 24 12:45:21 np0005533938 python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 12:45:23 np0005533938 python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 24 12:45:24 np0005533938 python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:24 np0005533938 python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:24 np0005533938 python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:25 np0005533938 python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:25 np0005533938 python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:25 np0005533938 python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:27 np0005533938 python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:27 np0005533938 python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:28 np0005533938 python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006327.4731293-21-167780810695481/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:28 np0005533938 python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:29 np0005533938 python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:29 np0005533938 python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:29 np0005533938 python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:30 np0005533938 python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:30 np0005533938 python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:30 np0005533938 python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:31 np0005533938 python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:31 np0005533938 python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:31 np0005533938 python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:32 np0005533938 python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:32 np0005533938 python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:32 np0005533938 python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:33 np0005533938 python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:33 np0005533938 python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:33 np0005533938 python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:34 np0005533938 python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:34 np0005533938 python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:34 np0005533938 python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:34 np0005533938 python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:35 np0005533938 python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:35 np0005533938 python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:35 np0005533938 python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:36 np0005533938 python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:36 np0005533938 python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:36 np0005533938 python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:45:39 np0005533938 python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 12:45:39 np0005533938 systemd[1]: Starting Time & Date Service...
Nov 24 12:45:39 np0005533938 systemd[1]: Started Time & Date Service.
Nov 24 12:45:39 np0005533938 systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Nov 24 12:45:40 np0005533938 python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:40 np0005533938 python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:41 np0005533938 python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764006340.6357918-153-1549162852287/source _original_basename=tmptgtfilkf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:41 np0005533938 python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:42 np0005533938 python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764006341.627796-183-102437993283363/source _original_basename=tmp417pb5j9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:43 np0005533938 python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:43 np0005533938 python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764006342.7206419-231-48848782624452/source _original_basename=tmpk1y9k6v4 follow=False checksum=e37e58be433a53918a64d1ef12dfc1e7d01516d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:44 np0005533938 python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:45:44 np0005533938 python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:45:44 np0005533938 python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:45:45 np0005533938 python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006344.46004-273-5557387571913/source _original_basename=tmppopzm06w follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:45:45 np0005533938 python3[6864]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-6218-5d16-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:45:46 np0005533938 python3[6892]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-6218-5d16-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 24 12:45:47 np0005533938 python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:46:04 np0005533938 python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:46:09 np0005533938 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 24 12:46:39 np0005533938 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 24 12:46:39 np0005533938 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5545] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 12:46:39 np0005533938 systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5760] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5811] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5818] device (eth1): carrier: link connected
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5822] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5834] policy: auto-activating connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5841] device (eth1): Activation: starting connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5843] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5847] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5853] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 12:46:39 np0005533938 NetworkManager[860]: <info>  [1764006399.5861] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:46:40 np0005533938 python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-80b4-97a3-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:46:50 np0005533938 python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:46:50 np0005533938 python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764006410.1853683-102-155853294885160/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d4c7d2f5197cc551e7b426198f8b8e1bde6a08c2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:46:51 np0005533938 python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 12:46:51 np0005533938 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 12:46:51 np0005533938 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 12:46:51 np0005533938 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 12:46:51 np0005533938 systemd[1]: Stopping Network Manager...
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9467] caught SIGTERM, shutting down normally.
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): canceled DHCP transaction
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9482] dhcp4 (eth0): state changed no lease
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9487] manager: NetworkManager state is now CONNECTING
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9680] dhcp4 (eth1): canceled DHCP transaction
Nov 24 12:46:51 np0005533938 NetworkManager[860]: <info>  [1764006411.9681] dhcp4 (eth1): state changed no lease
Nov 24 12:46:51 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 12:46:51 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 12:46:52 np0005533938 NetworkManager[860]: <info>  [1764006412.4357] exiting (success)
Nov 24 12:46:52 np0005533938 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 12:46:52 np0005533938 systemd[1]: Stopped Network Manager.
Nov 24 12:46:52 np0005533938 systemd[1]: Starting Network Manager...
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.4940] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.4941] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5007] manager[0x561fcfa46070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 12:46:52 np0005533938 systemd[1]: Starting Hostname Service...
Nov 24 12:46:52 np0005533938 systemd[1]: Started Hostname Service.
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5727] hostname: hostname: using hostnamed
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5727] hostname: static hostname changed from (none) to "np0005533938.novalocal"
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5734] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5742] manager[0x561fcfa46070]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5742] manager[0x561fcfa46070]: rfkill: WWAN hardware radio set enabled
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5779] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5779] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5780] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5780] manager: Networking is enabled by state file
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5783] settings: Loaded settings plugin: keyfile (internal)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5788] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5818] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5832] dhcp: init: Using DHCP client 'internal'
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5835] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5842] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5849] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5859] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5866] device (eth0): carrier: link connected
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5871] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5876] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5876] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5883] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5890] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5898] device (eth1): carrier: link connected
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5902] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5907] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93) (indicated)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5908] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5913] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5921] device (eth1): Activation: starting connection 'Wired connection 1' (3cf5caf6-dae0-3e12-91e8-cbb71d516e93)
Nov 24 12:46:52 np0005533938 systemd[1]: Started Network Manager.
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5940] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5949] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5956] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5960] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5965] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5972] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5977] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5982] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.5991] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6003] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6009] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6025] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6030] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6057] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6063] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6072] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6083] device (lo): Activation: successful, device activated.
Nov 24 12:46:52 np0005533938 NetworkManager[7196]: <info>  [1764006412.6104] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 12:46:52 np0005533938 systemd[1]: Starting Network Manager Wait Online...
Nov 24 12:46:53 np0005533938 python3[7245]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-80b4-97a3-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1094] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1365] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1369] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1375] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1381] device (eth0): Activation: successful, device activated.
Nov 24 12:46:53 np0005533938 NetworkManager[7196]: <info>  [1764006413.1391] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 12:47:03 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 12:47:22 np0005533938 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3121] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 12:47:38 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 12:47:38 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3413] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3415] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3421] device (eth1): Activation: successful, device activated.
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3426] manager: startup complete
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3428] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <warn>  [1764006458.3433] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3441] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 systemd[1]: Finished Network Manager Wait Online.
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3650] dhcp4 (eth1): canceled DHCP transaction
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3651] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3651] dhcp4 (eth1): state changed no lease
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3666] policy: auto-activating connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3670] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3671] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3673] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3679] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.3686] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.5140] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.5142] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 12:47:38 np0005533938 NetworkManager[7196]: <info>  [1764006458.5149] device (eth1): Activation: successful, device activated.
Nov 24 12:47:41 np0005533938 systemd[4302]: Starting Mark boot as successful...
Nov 24 12:47:41 np0005533938 systemd[4302]: Finished Mark boot as successful.
Nov 24 12:47:48 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 12:47:49 np0005533938 python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:47:50 np0005533938 python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006469.5237026-267-167324167131161/source _original_basename=tmpa77i20hx follow=False checksum=c553385c2e3b212f0e2dcf8c6aad3b5b766c5901 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:48:50 np0005533938 systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 24 12:50:41 np0005533938 systemd[4302]: Created slice User Background Tasks Slice.
Nov 24 12:50:41 np0005533938 systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Nov 24 12:50:41 np0005533938 systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Nov 24 12:53:49 np0005533938 systemd-logind[822]: New session 3 of user zuul.
Nov 24 12:53:49 np0005533938 systemd[1]: Started Session 3 of User zuul.
Nov 24 12:53:49 np0005533938 python3[7500]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5362-b8bb-000000001cc8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:50 np0005533938 python3[7529]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:50 np0005533938 python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:50 np0005533938 python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:51 np0005533938 python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:51 np0005533938 python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:52 np0005533938 python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:53:52 np0005533938 python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006831.803891-479-261587616630473/source _original_basename=tmpyjw3cgyw follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:53:53 np0005533938 python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 12:53:53 np0005533938 systemd[1]: Reloading.
Nov 24 12:53:53 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 12:53:55 np0005533938 python3[7891]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 24 12:53:55 np0005533938 python3[7917]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:55 np0005533938 python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:56 np0005533938 python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:56 np0005533938 python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:56 np0005533938 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5362-b8bb-000000001ccf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:53:57 np0005533938 python3[8058]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 12:53:59 np0005533938 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 24 12:53:59 np0005533938 irqbalance[818]: IRQ 26 affinity is now unmanaged
Nov 24 12:53:59 np0005533938 systemd[1]: session-3.scope: Deactivated successfully.
Nov 24 12:53:59 np0005533938 systemd[1]: session-3.scope: Consumed 4.268s CPU time.
Nov 24 12:53:59 np0005533938 systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 24 12:53:59 np0005533938 systemd-logind[822]: Removed session 3.
Nov 24 12:54:00 np0005533938 systemd-logind[822]: New session 4 of user zuul.
Nov 24 12:54:00 np0005533938 systemd[1]: Started Session 4 of User zuul.
Nov 24 12:54:01 np0005533938 python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 12:54:26 np0005533938 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 12:54:26 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 12:54:38 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  Converting 385 SID table entries...
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 12:54:50 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 12:54:54 np0005533938 setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 24 12:54:54 np0005533938 setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 24 12:55:07 np0005533938 kernel: SELinux:  Converting 388 SID table entries...
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 12:55:07 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 12:55:35 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 12:55:35 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 12:55:35 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 12:55:35 np0005533938 systemd[1]: Reloading.
Nov 24 12:55:35 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 12:55:35 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 12:55:39 np0005533938 irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 24 12:55:39 np0005533938 irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 24 12:55:43 np0005533938 python3[11619]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-600e-35cb-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 12:55:45 np0005533938 kernel: evm: overlay not supported
Nov 24 12:55:45 np0005533938 systemd[4302]: Starting D-Bus User Message Bus...
Nov 24 12:55:45 np0005533938 dbus-broker-launch[12345]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 24 12:55:45 np0005533938 dbus-broker-launch[12345]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 24 12:55:45 np0005533938 systemd[4302]: Started D-Bus User Message Bus.
Nov 24 12:55:45 np0005533938 dbus-broker-lau[12345]: Ready
Nov 24 12:55:45 np0005533938 systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 24 12:55:45 np0005533938 systemd[4302]: Created slice Slice /user.
Nov 24 12:55:45 np0005533938 systemd[4302]: podman-12029.scope: unit configures an IP firewall, but not running as root.
Nov 24 12:55:45 np0005533938 systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Nov 24 12:55:45 np0005533938 systemd[4302]: Started podman-12029.scope.
Nov 24 12:55:45 np0005533938 systemd[4302]: Started podman-pause-96706e27.scope.
Nov 24 12:55:46 np0005533938 python3[12714]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.83:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:55:46 np0005533938 python3[12714]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 24 12:55:46 np0005533938 systemd[1]: session-4.scope: Deactivated successfully.
Nov 24 12:55:46 np0005533938 systemd[1]: session-4.scope: Consumed 1min 3.087s CPU time.
Nov 24 12:55:46 np0005533938 systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 24 12:55:46 np0005533938 systemd-logind[822]: Removed session 4.
Nov 24 12:56:08 np0005533938 systemd-logind[822]: New session 5 of user zuul.
Nov 24 12:56:08 np0005533938 systemd[1]: Started Session 5 of User zuul.
Nov 24 12:56:08 np0005533938 python3[19582]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:56:09 np0005533938 python3[19701]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:56:09 np0005533938 python3[19975]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005533938.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 24 12:56:10 np0005533938 python3[20213]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPyJsA0slNInOFW3vXIajO+Ycf+ai01xx9++d2jFL87iEIJu8FOEeXKZ3B71uNxaMGyjhpI3Hj56b8aVGnqE46E= zuul@np0005533937.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 24 12:56:10 np0005533938 python3[20404]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:56:11 np0005533938 python3[20625]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764006970.6174285-135-182784570807934/source _original_basename=tmpxy4406g3 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:56:12 np0005533938 python3[20857]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 24 12:56:12 np0005533938 systemd[1]: Starting Hostname Service...
Nov 24 12:56:12 np0005533938 systemd[1]: Started Hostname Service.
Nov 24 12:56:12 np0005533938 systemd-hostnamed[20913]: Changed pretty hostname to 'compute-0'
Nov 24 12:56:12 np0005533938 systemd-hostnamed[20913]: Hostname set to <compute-0> (static)
Nov 24 12:56:12 np0005533938 NetworkManager[7196]: <info>  [1764006972.3904] hostname: static hostname changed from "np0005533938.novalocal" to "compute-0"
Nov 24 12:56:12 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 12:56:12 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 12:56:12 np0005533938 systemd[1]: session-5.scope: Deactivated successfully.
Nov 24 12:56:12 np0005533938 systemd[1]: session-5.scope: Consumed 2.246s CPU time.
Nov 24 12:56:12 np0005533938 systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 24 12:56:12 np0005533938 systemd-logind[822]: Removed session 5.
Nov 24 12:56:22 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 12:56:39 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 12:56:39 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 12:56:39 np0005533938 systemd[1]: man-db-cache-update.service: Consumed 55.551s CPU time.
Nov 24 12:56:39 np0005533938 systemd[1]: run-r95d575a4792f44a4bf0a59703c9b3d3c.service: Deactivated successfully.
Nov 24 12:56:42 np0005533938 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 12:57:41 np0005533938 systemd[1]: Starting dnf makecache...
Nov 24 12:57:42 np0005533938 dnf[29918]: Failed determining last makecache time.
Nov 24 12:57:42 np0005533938 dnf[29918]: CentOS Stream 9 - BaseOS                         23 kB/s | 7.3 kB     00:00
Nov 24 12:57:42 np0005533938 dnf[29918]: CentOS Stream 9 - AppStream                      69 kB/s | 7.4 kB     00:00
Nov 24 12:57:42 np0005533938 dnf[29918]: CentOS Stream 9 - CRB                            75 kB/s | 7.2 kB     00:00
Nov 24 12:57:43 np0005533938 dnf[29918]: CentOS Stream 9 - Extras packages                73 kB/s | 8.3 kB     00:00
Nov 24 12:57:43 np0005533938 dnf[29918]: Metadata cache created.
Nov 24 12:57:43 np0005533938 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 24 12:57:43 np0005533938 systemd[1]: Finished dnf makecache.
Nov 24 12:59:41 np0005533938 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 24 12:59:41 np0005533938 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 24 12:59:41 np0005533938 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 24 12:59:41 np0005533938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 24 12:59:51 np0005533938 systemd-logind[822]: New session 6 of user zuul.
Nov 24 12:59:51 np0005533938 systemd[1]: Started Session 6 of User zuul.
Nov 24 12:59:52 np0005533938 python3[30003]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 12:59:53 np0005533938 python3[30119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:54 np0005533938 python3[30192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:54 np0005533938 python3[30218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:54 np0005533938 python3[30291]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:55 np0005533938 python3[30317]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:55 np0005533938 python3[30390]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:55 np0005533938 python3[30416]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:56 np0005533938 python3[30489]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:56 np0005533938 python3[30515]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:56 np0005533938 python3[30588]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:56 np0005533938 python3[30614]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:57 np0005533938 python3[30687]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 12:59:57 np0005533938 python3[30713]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 12:59:57 np0005533938 python3[30786]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764007193.5245514-33756-72718121001795/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:00:09 np0005533938 python3[30844]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:05:09 np0005533938 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 24 13:05:09 np0005533938 systemd[1]: session-6.scope: Deactivated successfully.
Nov 24 13:05:09 np0005533938 systemd[1]: session-6.scope: Consumed 4.426s CPU time.
Nov 24 13:05:09 np0005533938 systemd-logind[822]: Removed session 6.
Nov 24 13:10:47 np0005533938 systemd-logind[822]: New session 7 of user zuul.
Nov 24 13:10:47 np0005533938 systemd[1]: Started Session 7 of User zuul.
Nov 24 13:10:48 np0005533938 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:10:50 np0005533938 python3.9[31204]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:10:58 np0005533938 systemd[1]: session-7.scope: Deactivated successfully.
Nov 24 13:10:58 np0005533938 systemd[1]: session-7.scope: Consumed 7.648s CPU time.
Nov 24 13:10:58 np0005533938 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 24 13:10:58 np0005533938 systemd-logind[822]: Removed session 7.
Nov 24 13:11:13 np0005533938 systemd-logind[822]: New session 8 of user zuul.
Nov 24 13:11:13 np0005533938 systemd[1]: Started Session 8 of User zuul.
Nov 24 13:11:14 np0005533938 python3.9[31415]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 13:11:15 np0005533938 python3.9[31589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:11:16 np0005533938 python3.9[31741]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:11:18 np0005533938 python3.9[31895]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:11:19 np0005533938 python3.9[32047]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:11:19 np0005533938 python3.9[32199]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:11:20 np0005533938 python3.9[32322]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764007879.19873-73-277902729656581/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:11:21 np0005533938 python3.9[32474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:11:22 np0005533938 python3.9[32630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:11:22 np0005533938 python3.9[32782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:11:23 np0005533938 python3.9[32932]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:11:26 np0005533938 python3.9[33185]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:11:27 np0005533938 python3.9[33335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:11:28 np0005533938 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:11:29 np0005533938 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:11:30 np0005533938 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:12:16 np0005533938 systemd[1]: Reloading.
Nov 24 13:12:16 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:12:16 np0005533938 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 24 13:12:16 np0005533938 systemd[1]: Reloading.
Nov 24 13:12:17 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:12:17 np0005533938 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 24 13:12:17 np0005533938 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 24 13:12:17 np0005533938 systemd[1]: Reloading.
Nov 24 13:12:17 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:12:17 np0005533938 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 24 13:12:17 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:12:17 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:12:17 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:13:20 np0005533938 kernel: SELinux:  Converting 2718 SID table entries...
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:13:20 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:13:21 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 24 13:13:21 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:13:21 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:13:21 np0005533938 systemd[1]: Reloading.
Nov 24 13:13:21 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:13:21 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:13:22 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:13:22 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:13:22 np0005533938 systemd[1]: man-db-cache-update.service: Consumed 1.069s CPU time.
Nov 24 13:13:22 np0005533938 systemd[1]: run-reb42ff0518594decbb993a17ec7f9b20.service: Deactivated successfully.
Nov 24 13:13:22 np0005533938 python3.9[35233]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:13:24 np0005533938 python3.9[35515]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 13:13:25 np0005533938 python3.9[35667]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 13:13:27 np0005533938 python3.9[35820]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:13:28 np0005533938 python3.9[35972]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 13:13:29 np0005533938 python3.9[36124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:13:30 np0005533938 python3.9[36276]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:13:30 np0005533938 python3.9[36399]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008009.8179655-236-83641516892809/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:13:31 np0005533938 python3.9[36551]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:13:35 np0005533938 python3.9[36703]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:13:35 np0005533938 python3.9[36858]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:13:36 np0005533938 python3.9[37010]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 13:13:36 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:13:37 np0005533938 python3.9[37164]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:13:38 np0005533938 python3.9[37322]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 13:13:39 np0005533938 python3.9[37482]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 13:13:39 np0005533938 python3.9[37635]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:13:40 np0005533938 python3.9[37793]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 13:13:41 np0005533938 python3.9[37945]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:13:43 np0005533938 python3.9[38099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:13:44 np0005533938 python3.9[38251]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:13:44 np0005533938 python3.9[38374]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008023.7231472-355-262813525443770/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:13:45 np0005533938 python3.9[38526]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:13:45 np0005533938 systemd[1]: Starting Load Kernel Modules...
Nov 24 13:13:46 np0005533938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 13:13:46 np0005533938 kernel: Bridge firewalling registered
Nov 24 13:13:46 np0005533938 systemd-modules-load[38530]: Inserted module 'br_netfilter'
Nov 24 13:13:46 np0005533938 systemd[1]: Finished Load Kernel Modules.
Nov 24 13:13:46 np0005533938 python3.9[38686]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:13:47 np0005533938 python3.9[38809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008026.1679487-378-193958592371871/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:13:47 np0005533938 python3.9[38961]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:13:50 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:13:51 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:13:51 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:13:51 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:13:51 np0005533938 systemd[1]: Reloading.
Nov 24 13:13:51 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:13:51 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:13:52 np0005533938 python3.9[40231]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:13:53 np0005533938 python3.9[41371]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 13:13:54 np0005533938 python3.9[42061]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:13:54 np0005533938 python3.9[42797]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:13:55 np0005533938 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 13:13:55 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:13:55 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:13:55 np0005533938 systemd[1]: man-db-cache-update.service: Consumed 4.829s CPU time.
Nov 24 13:13:55 np0005533938 systemd[1]: run-re355a0c3f3f64fae8f5d5c04c3d54460.service: Deactivated successfully.
Nov 24 13:13:55 np0005533938 systemd[1]: Starting Authorization Manager...
Nov 24 13:13:55 np0005533938 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 13:13:55 np0005533938 polkitd[43339]: Started polkitd version 0.117
Nov 24 13:13:55 np0005533938 systemd[1]: Started Authorization Manager.
Nov 24 13:13:56 np0005533938 python3.9[43509]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:13:56 np0005533938 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 13:13:56 np0005533938 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 13:13:56 np0005533938 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 13:13:56 np0005533938 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 13:13:56 np0005533938 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 13:13:57 np0005533938 python3.9[43671]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 13:13:59 np0005533938 python3.9[43823]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:13:59 np0005533938 systemd[1]: Reloading.
Nov 24 13:13:59 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:00 np0005533938 python3.9[44012]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:14:00 np0005533938 systemd[1]: Reloading.
Nov 24 13:14:00 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:01 np0005533938 python3.9[44201]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:14:02 np0005533938 python3.9[44354]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:14:02 np0005533938 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 24 13:14:02 np0005533938 python3.9[44507]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:14:04 np0005533938 python3.9[44669]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:14:05 np0005533938 python3.9[44822]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:14:05 np0005533938 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 13:14:05 np0005533938 systemd[1]: Stopped Apply Kernel Variables.
Nov 24 13:14:05 np0005533938 systemd[1]: Stopping Apply Kernel Variables...
Nov 24 13:14:05 np0005533938 systemd[1]: Starting Apply Kernel Variables...
Nov 24 13:14:05 np0005533938 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 13:14:05 np0005533938 systemd[1]: Finished Apply Kernel Variables.
Nov 24 13:14:05 np0005533938 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 24 13:14:05 np0005533938 systemd[1]: session-8.scope: Deactivated successfully.
Nov 24 13:14:05 np0005533938 systemd[1]: session-8.scope: Consumed 2min 7.797s CPU time.
Nov 24 13:14:05 np0005533938 systemd-logind[822]: Removed session 8.
Nov 24 13:14:12 np0005533938 systemd-logind[822]: New session 9 of user zuul.
Nov 24 13:14:12 np0005533938 systemd[1]: Started Session 9 of User zuul.
Nov 24 13:14:13 np0005533938 python3.9[45005]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:14:14 np0005533938 python3.9[45161]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 13:14:15 np0005533938 python3.9[45314]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:14:16 np0005533938 python3.9[45472]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 13:14:17 np0005533938 python3.9[45632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:14:18 np0005533938 python3.9[45716]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 13:14:21 np0005533938 python3.9[45879]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:14:32 np0005533938 kernel: SELinux:  Converting 2730 SID table entries...
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:14:32 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:14:32 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 24 13:14:32 np0005533938 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 24 13:14:34 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:14:34 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:14:34 np0005533938 systemd[1]: Reloading.
Nov 24 13:14:34 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:34 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:14:34 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:14:35 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:14:35 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:14:35 np0005533938 systemd[1]: run-r140e3154bdca4ca182f17759746d38cb.service: Deactivated successfully.
Nov 24 13:14:35 np0005533938 python3.9[46977]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:14:36 np0005533938 systemd[1]: Reloading.
Nov 24 13:14:36 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:14:36 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:36 np0005533938 systemd[1]: Starting Open vSwitch Database Unit...
Nov 24 13:14:36 np0005533938 chown[47019]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 24 13:14:36 np0005533938 ovs-ctl[47024]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 24 13:14:36 np0005533938 ovs-ctl[47024]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 24 13:14:36 np0005533938 ovs-ctl[47024]: Starting ovsdb-server [  OK  ]
Nov 24 13:14:36 np0005533938 ovs-vsctl[47073]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 24 13:14:36 np0005533938 ovs-vsctl[47093]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"302e9f34-0427-4ff9-a29b-2fc7b5250666\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 24 13:14:36 np0005533938 ovs-ctl[47024]: Configuring Open vSwitch system IDs [  OK  ]
Nov 24 13:14:36 np0005533938 ovs-vsctl[47099]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 13:14:36 np0005533938 ovs-ctl[47024]: Enabling remote OVSDB managers [  OK  ]
Nov 24 13:14:36 np0005533938 systemd[1]: Started Open vSwitch Database Unit.
Nov 24 13:14:36 np0005533938 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 24 13:14:36 np0005533938 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 24 13:14:36 np0005533938 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 24 13:14:36 np0005533938 kernel: openvswitch: Open vSwitch switching datapath
Nov 24 13:14:36 np0005533938 ovs-ctl[47143]: Inserting openvswitch module [  OK  ]
Nov 24 13:14:36 np0005533938 ovs-ctl[47112]: Starting ovs-vswitchd [  OK  ]
Nov 24 13:14:36 np0005533938 ovs-vsctl[47161]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 24 13:14:36 np0005533938 ovs-ctl[47112]: Enabling remote OVSDB managers [  OK  ]
Nov 24 13:14:36 np0005533938 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 24 13:14:36 np0005533938 systemd[1]: Starting Open vSwitch...
Nov 24 13:14:36 np0005533938 systemd[1]: Finished Open vSwitch.
Nov 24 13:14:37 np0005533938 python3.9[47312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:14:38 np0005533938 python3.9[47464]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 13:14:39 np0005533938 kernel: SELinux:  Converting 2744 SID table entries...
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:14:39 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:14:40 np0005533938 python3.9[47619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:14:41 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 24 13:14:41 np0005533938 python3.9[47777]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:14:43 np0005533938 python3.9[47930]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:14:45 np0005533938 python3.9[48217]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 13:14:45 np0005533938 python3.9[48367]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:14:46 np0005533938 python3.9[48521]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:14:48 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:14:48 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:14:48 np0005533938 systemd[1]: Reloading.
Nov 24 13:14:48 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:48 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:14:48 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:14:48 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:14:48 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:14:48 np0005533938 systemd[1]: run-rae6eb5a6ad0a418980af5d303af13673.service: Deactivated successfully.
Nov 24 13:14:49 np0005533938 python3.9[48838]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:14:49 np0005533938 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 24 13:14:49 np0005533938 systemd[1]: Stopped Network Manager Wait Online.
Nov 24 13:14:49 np0005533938 systemd[1]: Stopping Network Manager Wait Online...
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7578] caught SIGTERM, shutting down normally.
Nov 24 13:14:49 np0005533938 systemd[1]: Stopping Network Manager...
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7603] dhcp4 (eth0): canceled DHCP transaction
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7603] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7604] dhcp4 (eth0): state changed no lease
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7609] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 13:14:49 np0005533938 NetworkManager[7196]: <info>  [1764008089.7725] exiting (success)
Nov 24 13:14:49 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 13:14:49 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 13:14:49 np0005533938 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 24 13:14:49 np0005533938 systemd[1]: Stopped Network Manager.
Nov 24 13:14:49 np0005533938 systemd[1]: NetworkManager.service: Consumed 9.934s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Nov 24 13:14:49 np0005533938 systemd[1]: Starting Network Manager...
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.8508] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c726fd3c-29d8-43c4-9498-0fb31e19789a)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.8509] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.8570] manager[0x55e422c56090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 24 13:14:49 np0005533938 systemd[1]: Starting Hostname Service...
Nov 24 13:14:49 np0005533938 systemd[1]: Started Hostname Service.
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9694] hostname: hostname: using hostnamed
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9696] hostname: static hostname changed from (none) to "compute-0"
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9701] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9707] manager[0x55e422c56090]: rfkill: Wi-Fi hardware radio set enabled
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9707] manager[0x55e422c56090]: rfkill: WWAN hardware radio set enabled
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9729] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9740] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9740] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9741] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9742] manager: Networking is enabled by state file
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9744] settings: Loaded settings plugin: keyfile (internal)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9747] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9772] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9786] dhcp: init: Using DHCP client 'internal'
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9789] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9795] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9800] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9807] device (lo): Activation: starting connection 'lo' (5922deac-6043-4983-8df6-40dbc8abd7af)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9812] device (eth0): carrier: link connected
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9816] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9820] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9820] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9825] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9829] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9833] device (eth1): carrier: link connected
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9836] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9840] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6) (indicated)
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9841] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9844] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9848] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 13:14:49 np0005533938 systemd[1]: Started Network Manager.
Nov 24 13:14:49 np0005533938 NetworkManager[48851]: <info>  [1764008089.9855] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0322] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0329] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0333] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0338] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0344] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0350] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0356] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0375] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0387] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0392] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0407] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 systemd[1]: Starting Network Manager Wait Online...
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0441] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0455] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0459] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0464] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0473] device (lo): Activation: successful, device activated.
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0490] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0576] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0612] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0622] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0627] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0632] device (eth1): Activation: successful, device activated.
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0673] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0676] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0683] manager: NetworkManager state is now CONNECTED_SITE
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0689] device (eth0): Activation: successful, device activated.
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0696] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 24 13:14:50 np0005533938 NetworkManager[48851]: <info>  [1764008090.0730] manager: startup complete
Nov 24 13:14:50 np0005533938 systemd[1]: Finished Network Manager Wait Online.
Nov 24 13:14:50 np0005533938 python3.9[49064]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:14:54 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:14:54 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:14:54 np0005533938 systemd[1]: Reloading.
Nov 24 13:14:55 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:14:55 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:14:55 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:14:55 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:14:55 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:14:55 np0005533938 systemd[1]: run-r5a920b9428bd4ecdbe852baff3dac7b9.service: Deactivated successfully.
Nov 24 13:14:56 np0005533938 python3.9[49522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:14:57 np0005533938 python3.9[49674]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:14:58 np0005533938 python3.9[49828]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:14:58 np0005533938 python3.9[49980]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:14:59 np0005533938 python3.9[50132]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:14:59 np0005533938 python3.9[50284]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:00 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 13:15:00 np0005533938 python3.9[50436]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:15:01 np0005533938 python3.9[50559]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008100.156014-229-101920566833729/.source _original_basename=.6ce8bbu4 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:01 np0005533938 python3.9[50711]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:02 np0005533938 python3.9[50863]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 24 13:15:03 np0005533938 python3.9[51015]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:05 np0005533938 python3.9[51442]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 24 13:15:06 np0005533938 ansible-async_wrapper.py[51617]: Invoked with j512682444113 300 /home/zuul/.ansible/tmp/ansible-tmp-1764008105.4671092-295-128670388362531/AnsiballZ_edpm_os_net_config.py _
Nov 24 13:15:06 np0005533938 ansible-async_wrapper.py[51620]: Starting module and watcher
Nov 24 13:15:06 np0005533938 ansible-async_wrapper.py[51620]: Start watching 51621 (300)
Nov 24 13:15:06 np0005533938 ansible-async_wrapper.py[51621]: Start module (51621)
Nov 24 13:15:06 np0005533938 ansible-async_wrapper.py[51617]: Return async_wrapper task started.
Nov 24 13:15:06 np0005533938 python3.9[51622]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 24 13:15:07 np0005533938 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 24 13:15:07 np0005533938 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 24 13:15:07 np0005533938 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 24 13:15:07 np0005533938 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 24 13:15:07 np0005533938 kernel: cfg80211: failed to load regulatory.db
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.1892] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.1905] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2360] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2361] audit: op="connection-add" uuid="285580ad-f048-411d-8e01-fe54e62f2276" name="br-ex-br" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2374] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2375] audit: op="connection-add" uuid="c11266a8-6fc0-4266-85b5-dcae315789b7" name="br-ex-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2385] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2386] audit: op="connection-add" uuid="86720774-b112-455d-806c-5f7854e8dc94" name="eth1-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2395] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2396] audit: op="connection-add" uuid="df4b830c-9e1a-47b8-baf9-c21721e8e040" name="vlan20-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2405] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2406] audit: op="connection-add" uuid="da5153fc-4229-4827-94ac-6e1ee8a20568" name="vlan21-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2415] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2416] audit: op="connection-add" uuid="4abca62d-4175-45c3-a1b7-18a7b1fadbb9" name="vlan22-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2425] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2426] audit: op="connection-add" uuid="d61e71f7-e4b3-4117-b21e-87d15b0a9b91" name="vlan23-port" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2442] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2455] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2456] audit: op="connection-add" uuid="b3bc3db1-b67b-4fd3-8d15-af197881bb15" name="br-ex-if" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2495] audit: op="connection-update" uuid="730e1bbf-c4c7-52c0-85e9-2379c2b50bf6" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.master,connection.port-type,connection.controller,ipv6.routes,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ipv6.addresses,ipv6.routing-rules,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.routing-rules,ovs-interface.type,ovs-external-ids.data" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2508] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2509] audit: op="connection-add" uuid="e302bbdb-383e-4265-9522-035305242aca" name="vlan20-if" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2522] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2523] audit: op="connection-add" uuid="670ed63b-73be-486b-b4bf-95961f23ffe4" name="vlan21-if" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2535] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2537] audit: op="connection-add" uuid="1cdae8a7-f917-4bae-ab10-c6c13f970a21" name="vlan22-if" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2549] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2550] audit: op="connection-add" uuid="bb95f981-dffe-46e6-bbd1-952e1af482b5" name="vlan23-if" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2559] audit: op="connection-delete" uuid="3cf5caf6-dae0-3e12-91e8-cbb71d516e93" name="Wired connection 1" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2568] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2575] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2578] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (285580ad-f048-411d-8e01-fe54e62f2276)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2578] audit: op="connection-activate" uuid="285580ad-f048-411d-8e01-fe54e62f2276" name="br-ex-br" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2580] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2585] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2587] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c11266a8-6fc0-4266-85b5-dcae315789b7)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2588] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2592] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2594] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (86720774-b112-455d-806c-5f7854e8dc94)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2596] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2600] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2602] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (df4b830c-9e1a-47b8-baf9-c21721e8e040)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2604] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2608] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2612] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (da5153fc-4229-4827-94ac-6e1ee8a20568)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2613] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2617] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2619] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4abca62d-4175-45c3-a1b7-18a7b1fadbb9)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2620] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2625] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2627] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d61e71f7-e4b3-4117-b21e-87d15b0a9b91)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2628] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2630] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2631] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2635] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2638] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2641] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (b3bc3db1-b67b-4fd3-8d15-af197881bb15)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2641] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2643] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2644] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2645] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2646] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2653] device (eth1): disconnecting for new activation request.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2653] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2655] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2656] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2657] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2659] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2661] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2664] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (e302bbdb-383e-4265-9522-035305242aca)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2665] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2666] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2667] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2668] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2670] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2672] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2675] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (670ed63b-73be-486b-b4bf-95961f23ffe4)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2676] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2677] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2678] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2679] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2681] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2683] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2686] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1cdae8a7-f917-4bae-ab10-c6c13f970a21)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2686] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2688] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2689] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2690] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2692] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2696] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2700] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (bb95f981-dffe-46e6-bbd1-952e1af482b5)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2701] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2704] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2706] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2707] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2709] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2721] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2724] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2727] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2729] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2735] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2741] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2746] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2750] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2752] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2758] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: ovs-system: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2764] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2768] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2770] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2776] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: Timeout policy base is empty
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2782] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2786] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2788] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2793] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 systemd-udevd[51629]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2798] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2802] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2803] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2808] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): canceled DHCP transaction
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2813] dhcp4 (eth0): state changed no lease
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2815] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2825] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2829] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51623 uid=0 result="fail" reason="Device is not activated"
Nov 24 13:15:08 np0005533938 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2917] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2920] dhcp4 (eth0): state changed new lease, address=38.102.83.27
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2928] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.2983] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: br-ex: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3154] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3164] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3169] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3185] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 systemd-udevd[51628]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:15:08 np0005533938 kernel: vlan22: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3191] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3194] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3203] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3214] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3226] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3230] device (eth1): released from controller device eth1
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3237] device (eth1): disconnecting for new activation request.
Nov 24 13:15:08 np0005533938 kernel: vlan20: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3237] audit: op="connection-activate" uuid="730e1bbf-c4c7-52c0-85e9-2379c2b50bf6" name="ci-private-network" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3238] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3239] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3240] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3241] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3242] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3243] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 systemd-udevd[51627]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3249] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3253] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3258] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3266] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3272] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3277] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3281] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3287] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3294] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: vlan21: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3299] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3311] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3317] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3356] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3357] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3358] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3365] device (eth1): Activation: starting connection 'ci-private-network' (730e1bbf-c4c7-52c0-85e9-2379c2b50bf6)
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3376] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3393] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3397] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: vlan23: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3425] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 24 13:15:08 np0005533938 systemd-udevd[51729]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3435] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3447] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3456] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3469] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3475] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3488] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3501] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3509] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3565] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3566] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3568] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3569] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3570] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3575] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3582] device (eth1): Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3587] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3593] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3599] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3606] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3612] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3617] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3623] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3630] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3642] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3655] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3712] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3713] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 24 13:15:08 np0005533938 NetworkManager[48851]: <info>  [1764008108.3718] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 24 13:15:09 np0005533938 NetworkManager[48851]: <info>  [1764008109.5298] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 13:15:09 np0005533938 NetworkManager[48851]: <info>  [1764008109.7217] checkpoint[0x55e422c2c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 24 13:15:09 np0005533938 NetworkManager[48851]: <info>  [1764008109.7221] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.0191] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.0202] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 python3.9[51988]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=status _async_dir=/root/.ansible_async
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.2308] audit: op="networking-control" arg="global-dns-configuration" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.2372] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.2451] audit: op="networking-control" arg="global-dns-configuration" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.2472] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.3838] checkpoint[0x55e422c2ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 24 13:15:10 np0005533938 NetworkManager[48851]: <info>  [1764008110.3843] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51623 uid=0 result="success"
Nov 24 13:15:10 np0005533938 ansible-async_wrapper.py[51621]: Module complete (51621)
Nov 24 13:15:11 np0005533938 ansible-async_wrapper.py[51620]: Done in kid B.
Nov 24 13:15:13 np0005533938 python3.9[52092]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=status _async_dir=/root/.ansible_async
Nov 24 13:15:14 np0005533938 python3.9[52192]: ansible-ansible.legacy.async_status Invoked with jid=j512682444113.51617 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 13:15:14 np0005533938 python3.9[52344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:15:15 np0005533938 python3.9[52467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008114.2995408-322-84221145183548/.source.returncode _original_basename=.sz_ajvkl follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:16 np0005533938 python3.9[52619]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:15:16 np0005533938 python3.9[52742]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008115.589591-338-99110265194168/.source.cfg _original_basename=.lgqr30m9 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:17 np0005533938 python3.9[52895]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:15:17 np0005533938 systemd[1]: Reloading Network Manager...
Nov 24 13:15:17 np0005533938 NetworkManager[48851]: <info>  [1764008117.3540] audit: op="reload" arg="0" pid=52899 uid=0 result="success"
Nov 24 13:15:17 np0005533938 NetworkManager[48851]: <info>  [1764008117.3545] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 24 13:15:17 np0005533938 systemd[1]: Reloaded Network Manager.
Nov 24 13:15:17 np0005533938 systemd[1]: session-9.scope: Deactivated successfully.
Nov 24 13:15:17 np0005533938 systemd[1]: session-9.scope: Consumed 47.179s CPU time.
Nov 24 13:15:17 np0005533938 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 24 13:15:17 np0005533938 systemd-logind[822]: Removed session 9.
Nov 24 13:15:19 np0005533938 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 24 13:15:24 np0005533938 systemd-logind[822]: New session 10 of user zuul.
Nov 24 13:15:24 np0005533938 systemd[1]: Started Session 10 of User zuul.
Nov 24 13:15:25 np0005533938 python3.9[53085]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:15:26 np0005533938 python3.9[53239]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:15:27 np0005533938 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 24 13:15:27 np0005533938 python3.9[53433]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:15:27 np0005533938 systemd[1]: session-10.scope: Deactivated successfully.
Nov 24 13:15:27 np0005533938 systemd[1]: session-10.scope: Consumed 2.430s CPU time.
Nov 24 13:15:27 np0005533938 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 24 13:15:27 np0005533938 systemd-logind[822]: Removed session 10.
Nov 24 13:15:33 np0005533938 systemd-logind[822]: New session 11 of user zuul.
Nov 24 13:15:33 np0005533938 systemd[1]: Started Session 11 of User zuul.
Nov 24 13:15:34 np0005533938 python3.9[53615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:15:35 np0005533938 python3.9[53769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:15:36 np0005533938 python3.9[53926]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:15:37 np0005533938 python3.9[54010]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:15:39 np0005533938 python3.9[54164]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:15:40 np0005533938 python3.9[54359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:41 np0005533938 python3.9[54511]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:15:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-compat3724688552-merged.mount: Deactivated successfully.
Nov 24 13:15:41 np0005533938 podman[54512]: 2025-11-24 18:15:41.510039778 +0000 UTC m=+0.043755514 system refresh
Nov 24 13:15:42 np0005533938 python3.9[54674]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:15:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:15:42 np0005533938 python3.9[54797]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008141.7001703-79-250137618827268/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d67b1c249ab97334a6ce0bba856dd73ecc527dd8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:15:43 np0005533938 python3.9[54949]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:15:44 np0005533938 python3.9[55072]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008143.1383765-94-269533408929699/.source.conf follow=False _original_basename=registries.conf.j2 checksum=97513ee69a4b3dc3c4fd06acbbcaa9a991e77aee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:15:44 np0005533938 python3.9[55224]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:15:45 np0005533938 python3.9[55376]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:15:45 np0005533938 python3.9[55528]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:15:46 np0005533938 python3.9[55680]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:15:47 np0005533938 python3.9[55832]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:15:49 np0005533938 python3.9[55985]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:15:49 np0005533938 python3.9[56139]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:15:50 np0005533938 python3.9[56291]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:15:51 np0005533938 python3.9[56443]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:15:52 np0005533938 python3.9[56596]: ansible-service_facts Invoked
Nov 24 13:15:52 np0005533938 network[56613]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:15:52 np0005533938 network[56614]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:15:52 np0005533938 network[56615]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:15:56 np0005533938 python3.9[57067]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:16:00 np0005533938 python3.9[57221]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 13:16:01 np0005533938 python3.9[57373]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:02 np0005533938 python3.9[57498]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008161.2220657-238-220742480729490/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:02 np0005533938 python3.9[57652]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:03 np0005533938 python3.9[57777]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008162.4424393-253-139876111031936/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:04 np0005533938 python3.9[57931]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:05 np0005533938 python3.9[58085]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:16:06 np0005533938 python3.9[58169]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:07 np0005533938 python3.9[58323]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:16:08 np0005533938 python3.9[58407]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:16:08 np0005533938 chronyd[831]: chronyd exiting
Nov 24 13:16:08 np0005533938 systemd[1]: Stopping NTP client/server...
Nov 24 13:16:08 np0005533938 systemd[1]: chronyd.service: Deactivated successfully.
Nov 24 13:16:08 np0005533938 systemd[1]: Stopped NTP client/server.
Nov 24 13:16:08 np0005533938 systemd[1]: Starting NTP client/server...
Nov 24 13:16:08 np0005533938 chronyd[58415]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 24 13:16:08 np0005533938 chronyd[58415]: Frequency -24.803 +/- 0.135 ppm read from /var/lib/chrony/drift
Nov 24 13:16:08 np0005533938 chronyd[58415]: Loaded seccomp filter (level 2)
Nov 24 13:16:08 np0005533938 systemd[1]: Started NTP client/server.
Nov 24 13:16:08 np0005533938 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 13:16:08 np0005533938 systemd[1]: session-11.scope: Consumed 24.043s CPU time.
Nov 24 13:16:08 np0005533938 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 24 13:16:08 np0005533938 systemd-logind[822]: Removed session 11.
Nov 24 13:16:14 np0005533938 systemd-logind[822]: New session 12 of user zuul.
Nov 24 13:16:14 np0005533938 systemd[1]: Started Session 12 of User zuul.
Nov 24 13:16:15 np0005533938 python3.9[58596]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:16 np0005533938 python3.9[58748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:17 np0005533938 python3.9[58871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008175.8267186-34-146922593841782/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:17 np0005533938 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 13:16:17 np0005533938 systemd[1]: session-12.scope: Consumed 1.534s CPU time.
Nov 24 13:16:17 np0005533938 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 24 13:16:17 np0005533938 systemd-logind[822]: Removed session 12.
Nov 24 13:16:24 np0005533938 systemd-logind[822]: New session 13 of user zuul.
Nov 24 13:16:24 np0005533938 systemd[1]: Started Session 13 of User zuul.
Nov 24 13:16:25 np0005533938 python3.9[59050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:16:26 np0005533938 python3.9[59206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:27 np0005533938 python3.9[59381]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:28 np0005533938 python3.9[59504]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764008186.6649513-41-220773230077118/.source.json _original_basename=.vze46rqq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:28 np0005533938 python3.9[59656]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:29 np0005533938 python3.9[59779]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008188.3607852-64-281176437560661/.source _original_basename=.peo0bio8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:30 np0005533938 python3.9[59931]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:16:30 np0005533938 python3.9[60084]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:31 np0005533938 python3.9[60207]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008190.2142744-88-93909974305958/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:16:31 np0005533938 python3.9[60359]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:32 np0005533938 python3.9[60482]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764008191.3352098-88-247569987972936/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:16:33 np0005533938 python3.9[60634]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:33 np0005533938 python3.9[60786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:34 np0005533938 python3.9[60909]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008193.2471132-125-51856586266958/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:34 np0005533938 python3.9[61061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:35 np0005533938 python3.9[61184]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008194.3182425-140-198818339648927/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:36 np0005533938 python3.9[61336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:36 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:36 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:36 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:36 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:36 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:36 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:36 np0005533938 systemd[1]: Starting EDPM Container Shutdown...
Nov 24 13:16:36 np0005533938 systemd[1]: Finished EDPM Container Shutdown.
Nov 24 13:16:37 np0005533938 python3.9[61564]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:37 np0005533938 python3.9[61687]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008197.0021737-163-120840986693446/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:38 np0005533938 python3.9[61839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:39 np0005533938 python3.9[61962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008198.0969722-178-151704414447763/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:39 np0005533938 python3.9[62114]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:39 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:39 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:39 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:40 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:40 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:40 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:40 np0005533938 systemd[1]: Starting Create netns directory...
Nov 24 13:16:40 np0005533938 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 13:16:40 np0005533938 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 13:16:40 np0005533938 systemd[1]: Finished Create netns directory.
Nov 24 13:16:41 np0005533938 python3.9[62339]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:16:41 np0005533938 network[62356]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:16:41 np0005533938 network[62357]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:16:41 np0005533938 network[62358]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:16:45 np0005533938 python3.9[62620]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:45 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:45 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:45 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:45 np0005533938 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 24 13:16:45 np0005533938 iptables.init[62659]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 24 13:16:45 np0005533938 iptables.init[62659]: iptables: Flushing firewall rules: [  OK  ]
Nov 24 13:16:45 np0005533938 systemd[1]: iptables.service: Deactivated successfully.
Nov 24 13:16:45 np0005533938 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 24 13:16:46 np0005533938 python3.9[62856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:47 np0005533938 python3.9[63010]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:16:47 np0005533938 systemd[1]: Reloading.
Nov 24 13:16:47 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:16:47 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:16:47 np0005533938 systemd[1]: Starting Netfilter Tables...
Nov 24 13:16:47 np0005533938 systemd[1]: Finished Netfilter Tables.
Nov 24 13:16:48 np0005533938 python3.9[63203]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:16:48 np0005533938 python3.9[63356]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:49 np0005533938 python3.9[63481]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008208.4958873-247-68794356462798/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:50 np0005533938 python3.9[63634]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:16:50 np0005533938 systemd[1]: Reloading OpenSSH server daemon...
Nov 24 13:16:50 np0005533938 systemd[1]: Reloaded OpenSSH server daemon.
Nov 24 13:16:50 np0005533938 python3.9[63790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:51 np0005533938 python3.9[63942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:52 np0005533938 python3.9[64065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008210.9469914-278-152266197092233/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:52 np0005533938 python3.9[64217]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 13:16:52 np0005533938 systemd[1]: Starting Time & Date Service...
Nov 24 13:16:53 np0005533938 systemd[1]: Started Time & Date Service.
Nov 24 13:16:53 np0005533938 python3.9[64373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:54 np0005533938 python3.9[64525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:54 np0005533938 python3.9[64648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008213.90673-313-271355312258921/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:55 np0005533938 python3.9[64800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:55 np0005533938 python3.9[64923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764008214.9693563-328-10373998215694/.source.yaml _original_basename=.x6g0j1le follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:56 np0005533938 python3.9[65075]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:16:57 np0005533938 python3.9[65198]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008216.1166244-343-206024355879376/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:16:57 np0005533938 python3.9[65350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:16:58 np0005533938 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:16:58 np0005533938 python3[65656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 13:16:59 np0005533938 python3.9[65808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:17:00 np0005533938 python3.9[65931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008219.1507802-382-50136920939963/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:00 np0005533938 python3.9[66083]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:17:01 np0005533938 python3.9[66206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008220.2008538-397-246070300636422/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:01 np0005533938 python3.9[66358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:17:02 np0005533938 python3.9[66481]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008221.269204-412-214075281157543/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:03 np0005533938 python3.9[66633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:17:04 np0005533938 python3.9[66756]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008222.67422-427-14124389126245/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:05 np0005533938 python3.9[66908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:17:05 np0005533938 python3.9[67031]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764008224.3519802-442-89376329280697/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:06 np0005533938 python3.9[67183]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:06 np0005533938 python3.9[67335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:17:07 np0005533938 python3.9[67494]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:08 np0005533938 python3.9[67647]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:09 np0005533938 python3.9[67799]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:10 np0005533938 python3.9[67951]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 13:17:10 np0005533938 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 13:17:11 np0005533938 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 13:17:11 np0005533938 systemd[1]: session-13.scope: Consumed 33.906s CPU time.
Nov 24 13:17:11 np0005533938 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 24 13:17:11 np0005533938 systemd-logind[822]: Removed session 13.
Nov 24 13:17:16 np0005533938 systemd-logind[822]: New session 14 of user zuul.
Nov 24 13:17:16 np0005533938 systemd[1]: Started Session 14 of User zuul.
Nov 24 13:17:17 np0005533938 python3.9[68285]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 13:17:17 np0005533938 python3.9[68437]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:17:18 np0005533938 python3.9[68589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:17:19 np0005533938 python3.9[68741]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhS8frVtJkphIV3qjYEBaOrfFAUD1SVRr7LLCHE4Oz5qMeQHKYm90YB9nO7ntC/BIXenfYoTm6fYVn1JaiGoGSQdRBXPQG/o6WD6Ec3pD/Mcl/KMJGYuMHxaEizMQ3wOpo20hOTbEsu6v2y+3ETjeAG0UF9fWh/vCDy6bX0hMh8o7mf9skIV8gvWuCbJo4Vk92qBh7z9qccV5j5J5maU9c28+VEF1nlN0GSyYT/IRFdD7gDE7QFZ9QpapaWGSFE7nCTgz4Mw4nnJ+KaxvkxxHf4knCpDxk59+uk/+9G8oUiFokkDbJiPI6sZS+BALztR/CzJpNrAYaYmhzjbSRYb51wPj5EnXYzqgik4JzhmsqsepLD79RGK2b4ZWnQVP7WFOUL+Wm4+MkbF0LVmcy1XJeA5yhmhodU+fpO1t1SZRONc1eqep1NVqxMOHXOQgKGpIAg95Vpx9szp5NhOkzp1cQTeEhxfog0RyENmd9NxKBpu3NmtFN+dETuLT2Co1JMhM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2lZlyCN0FJ/jD1EDSdkabXa5aE54G6xn7+v3fPL+BD#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFHJ7xweyewLWbij/U6h4iEFO2zmE+OAqJetXAaVahyXo6KOKB5z+dQ1ItOa9RPE9AAjyAVton3sCrkTSjqY88=#012 create=True mode=0644 path=/tmp/ansible.10_8toja state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:20 np0005533938 python3.9[68893]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.10_8toja' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:17:21 np0005533938 python3.9[69047]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.10_8toja state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:21 np0005533938 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 13:17:21 np0005533938 systemd[1]: session-14.scope: Consumed 3.360s CPU time.
Nov 24 13:17:21 np0005533938 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 24 13:17:21 np0005533938 systemd-logind[822]: Removed session 14.
Nov 24 13:17:23 np0005533938 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 13:17:27 np0005533938 systemd-logind[822]: New session 15 of user zuul.
Nov 24 13:17:27 np0005533938 systemd[1]: Started Session 15 of User zuul.
Nov 24 13:17:28 np0005533938 python3.9[69228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:17:29 np0005533938 python3.9[69384]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 13:17:30 np0005533938 python3.9[69538]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:17:31 np0005533938 python3.9[69691]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:17:32 np0005533938 python3.9[69844]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:17:32 np0005533938 python3.9[69998]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:17:33 np0005533938 python3.9[70153]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:17:33 np0005533938 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 13:17:33 np0005533938 systemd[1]: session-15.scope: Consumed 4.373s CPU time.
Nov 24 13:17:33 np0005533938 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 24 13:17:33 np0005533938 systemd-logind[822]: Removed session 15.
Nov 24 13:17:39 np0005533938 systemd-logind[822]: New session 16 of user zuul.
Nov 24 13:17:39 np0005533938 systemd[1]: Started Session 16 of User zuul.
Nov 24 13:17:40 np0005533938 python3.9[70331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:17:41 np0005533938 python3.9[70487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:17:42 np0005533938 python3.9[70571]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 13:17:44 np0005533938 python3.9[70722]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:17:45 np0005533938 python3.9[70873]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:17:46 np0005533938 python3.9[71023]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:17:46 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:17:47 np0005533938 python3.9[71174]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:17:47 np0005533938 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 13:17:47 np0005533938 systemd[1]: session-16.scope: Consumed 5.840s CPU time.
Nov 24 13:17:47 np0005533938 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 24 13:17:47 np0005533938 systemd-logind[822]: Removed session 16.
Nov 24 13:17:55 np0005533938 systemd-logind[822]: New session 17 of user zuul.
Nov 24 13:17:55 np0005533938 systemd[1]: Started Session 17 of User zuul.
Nov 24 13:18:00 np0005533938 python3[71940]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:18:02 np0005533938 python3[72035]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 13:18:03 np0005533938 python3[72062]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:04 np0005533938 python3[72088]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:04 np0005533938 kernel: loop: module loaded
Nov 24 13:18:04 np0005533938 kernel: loop3: detected capacity change from 0 to 41943040
Nov 24 13:18:04 np0005533938 python3[72122]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:04 np0005533938 lvm[72125]: PV /dev/loop3 not used.
Nov 24 13:18:04 np0005533938 lvm[72134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:18:04 np0005533938 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 24 13:18:04 np0005533938 lvm[72136]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 24 13:18:04 np0005533938 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 24 13:18:05 np0005533938 python3[72214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:18:05 np0005533938 python3[72287]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008284.9248202-36414-77586244552422/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:06 np0005533938 python3[72337]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:18:06 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:06 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:06 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:06 np0005533938 systemd[1]: Starting Ceph OSD losetup...
Nov 24 13:18:06 np0005533938 bash[72378]: /dev/loop3: [64513]:4194936 (/var/lib/ceph-osd-0.img)
Nov 24 13:18:06 np0005533938 systemd[1]: Finished Ceph OSD losetup.
Nov 24 13:18:06 np0005533938 lvm[72379]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:18:06 np0005533938 lvm[72379]: VG ceph_vg0 finished
Nov 24 13:18:07 np0005533938 python3[72405]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 13:18:08 np0005533938 python3[72432]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:09 np0005533938 python3[72458]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:09 np0005533938 kernel: loop4: detected capacity change from 0 to 41943040
Nov 24 13:18:09 np0005533938 python3[72490]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:09 np0005533938 lvm[72493]: PV /dev/loop4 not used.
Nov 24 13:18:09 np0005533938 lvm[72503]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:18:09 np0005533938 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 24 13:18:09 np0005533938 lvm[72505]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 24 13:18:09 np0005533938 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 24 13:18:10 np0005533938 python3[72583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:18:10 np0005533938 python3[72656]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008289.8131237-36441-112166416995885/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:11 np0005533938 python3[72706]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:18:11 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:11 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:11 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:11 np0005533938 systemd[1]: Starting Ceph OSD losetup...
Nov 24 13:18:11 np0005533938 bash[72746]: /dev/loop4: [64513]:4328009 (/var/lib/ceph-osd-1.img)
Nov 24 13:18:11 np0005533938 systemd[1]: Finished Ceph OSD losetup.
Nov 24 13:18:11 np0005533938 lvm[72747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:18:11 np0005533938 lvm[72747]: VG ceph_vg1 finished
Nov 24 13:18:11 np0005533938 python3[72773]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 13:18:13 np0005533938 python3[72800]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:13 np0005533938 python3[72826]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:13 np0005533938 kernel: loop5: detected capacity change from 0 to 41943040
Nov 24 13:18:13 np0005533938 python3[72858]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:13 np0005533938 lvm[72861]: PV /dev/loop5 not used.
Nov 24 13:18:14 np0005533938 lvm[72870]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:18:14 np0005533938 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 24 13:18:14 np0005533938 lvm[72872]:  1 logical volume(s) in volume group "ceph_vg2" now active
Nov 24 13:18:14 np0005533938 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 24 13:18:14 np0005533938 python3[72950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:18:15 np0005533938 python3[73023]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008294.2611268-36470-204393291903758/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:15 np0005533938 python3[73073]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:18:15 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:15 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:15 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:15 np0005533938 systemd[1]: Starting Ceph OSD losetup...
Nov 24 13:18:15 np0005533938 bash[73114]: /dev/loop5: [64513]:4328010 (/var/lib/ceph-osd-2.img)
Nov 24 13:18:15 np0005533938 systemd[1]: Finished Ceph OSD losetup.
Nov 24 13:18:15 np0005533938 lvm[73115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:18:15 np0005533938 lvm[73115]: VG ceph_vg2 finished
Nov 24 13:18:17 np0005533938 python3[73139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:18:18 np0005533938 chronyd[58415]: Selected source 167.160.187.12 (pool.ntp.org)
Nov 24 13:18:20 np0005533938 python3[73232]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 24 13:18:21 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:18:21 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:18:22 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:18:22 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:18:22 np0005533938 systemd[1]: run-rda00198448e048848df8d8060e2a43ed.service: Deactivated successfully.
Nov 24 13:18:22 np0005533938 python3[73343]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:22 np0005533938 python3[73371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:23 np0005533938 python3[73434]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:23 np0005533938 python3[73460]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:23 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:24 np0005533938 python3[73538]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:18:24 np0005533938 python3[73611]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008304.0859396-36622-104594281702851/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:25 np0005533938 python3[73713]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:18:25 np0005533938 python3[73786]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008305.2705932-36640-269736424405030/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:18:26 np0005533938 python3[73836]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:26 np0005533938 python3[73864]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:26 np0005533938 python3[73892]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:18:27 np0005533938 python3[73920]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:18:27 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:27 np0005533938 systemd-logind[822]: New session 18 of user ceph-admin.
Nov 24 13:18:27 np0005533938 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 13:18:27 np0005533938 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 13:18:27 np0005533938 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 13:18:27 np0005533938 systemd[1]: Starting User Manager for UID 42477...
Nov 24 13:18:27 np0005533938 systemd[73941]: Queued start job for default target Main User Target.
Nov 24 13:18:27 np0005533938 systemd[73941]: Created slice User Application Slice.
Nov 24 13:18:27 np0005533938 systemd[73941]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 13:18:27 np0005533938 systemd[73941]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 13:18:27 np0005533938 systemd[73941]: Reached target Paths.
Nov 24 13:18:27 np0005533938 systemd[73941]: Reached target Timers.
Nov 24 13:18:27 np0005533938 systemd[73941]: Starting D-Bus User Message Bus Socket...
Nov 24 13:18:27 np0005533938 systemd[73941]: Starting Create User's Volatile Files and Directories...
Nov 24 13:18:27 np0005533938 systemd[73941]: Listening on D-Bus User Message Bus Socket.
Nov 24 13:18:27 np0005533938 systemd[73941]: Reached target Sockets.
Nov 24 13:18:27 np0005533938 systemd[73941]: Finished Create User's Volatile Files and Directories.
Nov 24 13:18:27 np0005533938 systemd[73941]: Reached target Basic System.
Nov 24 13:18:27 np0005533938 systemd[73941]: Reached target Main User Target.
Nov 24 13:18:27 np0005533938 systemd[73941]: Startup finished in 135ms.
Nov 24 13:18:27 np0005533938 systemd[1]: Started User Manager for UID 42477.
Nov 24 13:18:27 np0005533938 systemd[1]: Started Session 18 of User ceph-admin.
Nov 24 13:18:27 np0005533938 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 13:18:27 np0005533938 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 24 13:18:27 np0005533938 systemd-logind[822]: Removed session 18.
Nov 24 13:18:30 np0005533938 systemd[1]: var-lib-containers-storage-overlay-compat381069562-lower\x2dmapped.mount: Deactivated successfully.
Nov 24 13:18:38 np0005533938 systemd[1]: Stopping User Manager for UID 42477...
Nov 24 13:18:38 np0005533938 systemd[73941]: Activating special unit Exit the Session...
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped target Main User Target.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped target Basic System.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped target Paths.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped target Sockets.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped target Timers.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 13:18:38 np0005533938 systemd[73941]: Closed D-Bus User Message Bus Socket.
Nov 24 13:18:38 np0005533938 systemd[73941]: Stopped Create User's Volatile Files and Directories.
Nov 24 13:18:38 np0005533938 systemd[73941]: Removed slice User Application Slice.
Nov 24 13:18:38 np0005533938 systemd[73941]: Reached target Shutdown.
Nov 24 13:18:38 np0005533938 systemd[73941]: Finished Exit the Session.
Nov 24 13:18:38 np0005533938 systemd[73941]: Reached target Exit the Session.
Nov 24 13:18:38 np0005533938 systemd[1]: user@42477.service: Deactivated successfully.
Nov 24 13:18:38 np0005533938 systemd[1]: Stopped User Manager for UID 42477.
Nov 24 13:18:38 np0005533938 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 24 13:18:38 np0005533938 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 24 13:18:38 np0005533938 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 24 13:18:38 np0005533938 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 24 13:18:38 np0005533938 systemd[1]: Removed slice User Slice of UID 42477.
Nov 24 13:18:41 np0005533938 podman[73994]: 2025-11-24 18:18:41.415429495 +0000 UTC m=+13.422087290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.491738028 +0000 UTC m=+0.047967521 container create bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:18:41 np0005533938 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 24 13:18:41 np0005533938 systemd[1]: Started libpod-conmon-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope.
Nov 24 13:18:41 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.470575603 +0000 UTC m=+0.026805146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.599383077 +0000 UTC m=+0.155612610 container init bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.606797381 +0000 UTC m=+0.163026864 container start bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.610233947 +0000 UTC m=+0.166463480 container attach bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:18:41 np0005533938 peaceful_mcnulty[74068]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 13:18:41 np0005533938 systemd[1]: libpod-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope: Deactivated successfully.
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.913032607 +0000 UTC m=+0.469262090 container died bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:18:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fab0c205caeb711a1e1dbe14e825a6704978e2dd659e25353db701a9c9b208df-merged.mount: Deactivated successfully.
Nov 24 13:18:41 np0005533938 podman[74052]: 2025-11-24 18:18:41.958285859 +0000 UTC m=+0.514515342 container remove bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e (image=quay.io/ceph/ceph:v18, name=peaceful_mcnulty, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:18:41 np0005533938 systemd[1]: libpod-conmon-bc3a7eb7c0e25c38492abb20e588db208319a4f338ecf0431cb4270704a6ed2e.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.012380651 +0000 UTC m=+0.035871171 container create 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:18:42 np0005533938 systemd[1]: Started libpod-conmon-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope.
Nov 24 13:18:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.087247398 +0000 UTC m=+0.110737938 container init 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:41.996987939 +0000 UTC m=+0.020478479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.094922438 +0000 UTC m=+0.118412958 container start 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.097651536 +0000 UTC m=+0.121142056 container attach 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:18:42 np0005533938 xenodochial_bose[74103]: 167 167
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.100429115 +0000 UTC m=+0.123919665 container died 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:18:42 np0005533938 podman[74086]: 2025-11-24 18:18:42.137042283 +0000 UTC m=+0.160532803 container remove 8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe (image=quay.io/ceph/ceph:v18, name=xenodochial_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-conmon-8e0310c76eb66d7a3438ef85b6cb93f8342ab0ff22b0afe1b93a93876bbf0dfe.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.20344732 +0000 UTC m=+0.042106036 container create ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:18:42 np0005533938 systemd[1]: Started libpod-conmon-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope.
Nov 24 13:18:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.184991842 +0000 UTC m=+0.023650578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.286688994 +0000 UTC m=+0.125347780 container init ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.29296974 +0000 UTC m=+0.131628486 container start ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.297338689 +0000 UTC m=+0.135997505 container attach ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:18:42 np0005533938 distracted_buck[74136]: AQCCoSRpI0CEEhAAh1AsvDj3Tier0fuH8CTJ+A==
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.314560396 +0000 UTC m=+0.153219142 container died ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:42 np0005533938 podman[74120]: 2025-11-24 18:18:42.354611109 +0000 UTC m=+0.193269825 container remove ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a (image=quay.io/ceph/ceph:v18, name=distracted_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-conmon-ecfec0ec0577291b15ab8ec15ee67927fa5cde0f15ff15e6a54301c23042373a.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.416075414 +0000 UTC m=+0.041086890 container create 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:42 np0005533938 systemd[1]: Started libpod-conmon-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope.
Nov 24 13:18:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.476934513 +0000 UTC m=+0.101946009 container init 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.481483006 +0000 UTC m=+0.106494482 container start 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.48446348 +0000 UTC m=+0.109474956 container attach 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.396068507 +0000 UTC m=+0.021080043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:42 np0005533938 festive_bhabha[74169]: AQCCoSRpe+LQHRAABOTr2MY5IzGxlo1L2CWrEw==
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.503563933 +0000 UTC m=+0.128575429 container died 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:18:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1bf4a24ee8880e81dd511bccbe4675ee5fb29b9f69803619a61a77033cb22ace-merged.mount: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74154]: 2025-11-24 18:18:42.536543841 +0000 UTC m=+0.161555317 container remove 9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e (image=quay.io/ceph/ceph:v18, name=festive_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:18:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-conmon-9463b69d928537a4d3154c60f8db51c36c31ee558701a97aedd431a412b6689e.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.590307215 +0000 UTC m=+0.034522217 container create 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:42 np0005533938 systemd[1]: Started libpod-conmon-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope.
Nov 24 13:18:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.57517663 +0000 UTC m=+0.019391652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.841166967 +0000 UTC m=+0.285381989 container init 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.846037908 +0000 UTC m=+0.290252910 container start 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.849221457 +0000 UTC m=+0.293436529 container attach 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:18:42 np0005533938 optimistic_cray[74204]: AQCCoSRpGfBtMxAAbOkOo3GiHZcm0uhcwe9P5g==
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.86588314 +0000 UTC m=+0.310098142 container died 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:42 np0005533938 podman[74188]: 2025-11-24 18:18:42.894552191 +0000 UTC m=+0.338767213 container remove 21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08 (image=quay.io/ceph/ceph:v18, name=optimistic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:18:42 np0005533938 systemd[1]: libpod-conmon-21fd72d6bb588a6c7c099f1ce28ee58b807adda6fd3fc4efaa702a9e4a0b0f08.scope: Deactivated successfully.
Nov 24 13:18:42 np0005533938 podman[74222]: 2025-11-24 18:18:42.961644425 +0000 UTC m=+0.044327680 container create 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:18:42 np0005533938 systemd[1]: Started libpod-conmon-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope.
Nov 24 13:18:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306b1de28570fa31e9274d5605e7110e2473cfc3b86d3bee9b8a033b1942d15e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:43.028506833 +0000 UTC m=+0.111190178 container init 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:43.032873592 +0000 UTC m=+0.115556847 container start 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:43.035829785 +0000 UTC m=+0.118513140 container attach 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:42.942670044 +0000 UTC m=+0.025353349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:43 np0005533938 suspicious_torvalds[74239]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 24 13:18:43 np0005533938 suspicious_torvalds[74239]: setting min_mon_release = pacific
Nov 24 13:18:43 np0005533938 suspicious_torvalds[74239]: /usr/bin/monmaptool: set fsid to e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:43 np0005533938 suspicious_torvalds[74239]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 24 13:18:43 np0005533938 systemd[1]: libpod-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope: Deactivated successfully.
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:43.061460881 +0000 UTC m=+0.144144196 container died 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:18:43 np0005533938 podman[74222]: 2025-11-24 18:18:43.099009502 +0000 UTC m=+0.181692797 container remove 5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c (image=quay.io/ceph/ceph:v18, name=suspicious_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:43 np0005533938 systemd[1]: libpod-conmon-5a08f657c75c42814c29ea1eb0adaef9c9232a3b4ae38f721ed1ceecc2ed0a8c.scope: Deactivated successfully.
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.160094957 +0000 UTC m=+0.041712115 container create 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 13:18:43 np0005533938 systemd[1]: Started libpod-conmon-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope.
Nov 24 13:18:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dfb2e3728ab9376905b152fe8f978b06a92dd32436562e35efa6e2cb7d3a901/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.217819609 +0000 UTC m=+0.099436787 container init 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.225015967 +0000 UTC m=+0.106633115 container start 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.228499434 +0000 UTC m=+0.110116602 container attach 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.143385453 +0000 UTC m=+0.025002631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:43 np0005533938 systemd[1]: libpod-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope: Deactivated successfully.
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.320055155 +0000 UTC m=+0.201672303 container died 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:18:43 np0005533938 podman[74260]: 2025-11-24 18:18:43.356030157 +0000 UTC m=+0.237647315 container remove 96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3 (image=quay.io/ceph/ceph:v18, name=mystifying_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:43 np0005533938 systemd[1]: libpod-conmon-96974f0d4009e7bb5c463af2c3243d7f4f68c08808d6d55e28b283afe26b17b3.scope: Deactivated successfully.
Nov 24 13:18:43 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:43 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:43 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:43 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:43 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8a5fbd4a643ec84a1b384ecb85e67ff50286b8c6d3304293776a1918d8cdeba3-merged.mount: Deactivated successfully.
Nov 24 13:18:43 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:43 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:43 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:43 np0005533938 systemd[1]: Reached target All Ceph clusters and services.
Nov 24 13:18:43 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:43 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:43 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:44 np0005533938 systemd[1]: Reached target Ceph cluster e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:44 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:44 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:44 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:44 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:44 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:44 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:44 np0005533938 systemd[1]: Created slice Slice /system/ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:44 np0005533938 systemd[1]: Reached target System Time Set.
Nov 24 13:18:44 np0005533938 systemd[1]: Reached target System Time Synchronized.
Nov 24 13:18:44 np0005533938 systemd[1]: Starting Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:18:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:44 np0005533938 podman[74552]: 2025-11-24 18:18:44.825776479 +0000 UTC m=+0.034930717 container create 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:44 np0005533938 podman[74552]: 2025-11-24 18:18:44.882482976 +0000 UTC m=+0.091637224 container init 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:44 np0005533938 podman[74552]: 2025-11-24 18:18:44.888012713 +0000 UTC m=+0.097166971 container start 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:44 np0005533938 bash[74552]: 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf
Nov 24 13:18:44 np0005533938 podman[74552]: 2025-11-24 18:18:44.810694495 +0000 UTC m=+0.019848783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:44 np0005533938 systemd[1]: Started Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: pidfile_write: ignore empty --pid-file
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: load: jerasure load: lrc 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Git sha 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: DB SUMMARY
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: DB Session ID:  ABEEGKT7BPHIYELDG0VH
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                                     Options.env: 0x55a9e4d0fc40
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                                Options.info_log: 0x55a9e6d80e80
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                                 Options.wal_dir: 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                    Options.write_buffer_manager: 0x55a9e6d90b40
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                               Options.row_cache: None
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                              Options.wal_filter: None
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.wal_compression: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.max_background_jobs: 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Compression algorithms supported:
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kZSTD supported: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kXpressCompression supported: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kBZip2Compression supported: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kLZ4Compression supported: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kZlibCompression supported: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: #011kSnappyCompression supported: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:           Options.merge_operator: 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:        Options.compaction_filter: None
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a9e6d80a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a9e6d791f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.compression: NoCompression
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.num_levels: 7
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bcbf129-cc59-4441-a37f-051fd374ef44
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324943082, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324944883, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "ABEEGKT7BPHIYELDG0VH", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008324945080, "job": 1, "event": "recovery_finished"}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a9e6da2e00
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: DB pointer 0x55a9e6eac000
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a9e6d791f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@-1(???) e0 preinit fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-24T18:18:43.270587Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).mds e1 new map
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mkfs e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:44 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 24 13:18:44 np0005533938 podman[74573]: 2025-11-24 18:18:44.996129145 +0000 UTC m=+0.064064550 container create 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:18:45 np0005533938 ceph-mon[74572]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 13:18:45 np0005533938 ceph-mon[74572]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 13:18:45 np0005533938 ceph-mon[74572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:45 np0005533938 systemd[1]: Started libpod-conmon-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope.
Nov 24 13:18:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24cd9d752b0c7f2861011ea3d3db6848ec9c127eb06362d116c403de80da6b74/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:44.972021247 +0000 UTC m=+0.039956692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:45.077640506 +0000 UTC m=+0.145575931 container init 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:45.084062926 +0000 UTC m=+0.151998331 container start 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:45.086872865 +0000 UTC m=+0.154808310 container attach 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:45 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 13:18:45 np0005533938 ceph-mon[74572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437311656' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:  cluster:
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    id:     e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    health: HEALTH_OK
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]: 
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:  services:
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    mon: 1 daemons, quorum compute-0 (age 0.484226s)
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    mgr: no daemons active
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    osd: 0 osds: 0 up, 0 in
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]: 
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:  data:
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    pools:   0 pools, 0 pgs
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    objects: 0 objects, 0 B
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    usage:   0 B used, 0 B / 0 B avail
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]:    pgs:     
Nov 24 13:18:45 np0005533938 inspiring_mccarthy[74628]: 
Nov 24 13:18:45 np0005533938 systemd[1]: libpod-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope: Deactivated successfully.
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:45.477005302 +0000 UTC m=+0.544940747 container died 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:18:45 np0005533938 podman[74573]: 2025-11-24 18:18:45.514565833 +0000 UTC m=+0.582501238 container remove 4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286 (image=quay.io/ceph/ceph:v18, name=inspiring_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:18:45 np0005533938 systemd[1]: libpod-conmon-4ac3bce430c15daa950c6a4736be8839497f3fee723fbe2fe1c7b970806ee286.scope: Deactivated successfully.
Nov 24 13:18:45 np0005533938 podman[74666]: 2025-11-24 18:18:45.575422823 +0000 UTC m=+0.040933337 container create 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 13:18:45 np0005533938 systemd[1]: Started libpod-conmon-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope.
Nov 24 13:18:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:45 np0005533938 podman[74666]: 2025-11-24 18:18:45.555630982 +0000 UTC m=+0.021141506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:45 np0005533938 podman[74666]: 2025-11-24 18:18:45.658211146 +0000 UTC m=+0.123721740 container init 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:18:45 np0005533938 podman[74666]: 2025-11-24 18:18:45.665019005 +0000 UTC m=+0.130529519 container start 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:18:45 np0005533938 podman[74666]: 2025-11-24 18:18:45.668174033 +0000 UTC m=+0.133684587 container attach 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974578884' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974578884' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 13:18:46 np0005533938 ecstatic_chaum[74682]: 
Nov 24 13:18:46 np0005533938 ecstatic_chaum[74682]: [global]
Nov 24 13:18:46 np0005533938 ecstatic_chaum[74682]: 	fsid = e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:46 np0005533938 ecstatic_chaum[74682]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 24 13:18:46 np0005533938 ecstatic_chaum[74682]: 	osd_crush_chooseleaf_type = 0
Nov 24 13:18:46 np0005533938 systemd[1]: libpod-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope: Deactivated successfully.
Nov 24 13:18:46 np0005533938 podman[74666]: 2025-11-24 18:18:46.075405153 +0000 UTC m=+0.540915677 container died 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e6bb7f0ef9ca30d1e124faf17699289a0c08b8cef7a7bad29b8572ddce1bea50-merged.mount: Deactivated successfully.
Nov 24 13:18:46 np0005533938 podman[74666]: 2025-11-24 18:18:46.1155705 +0000 UTC m=+0.581081014 container remove 1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240 (image=quay.io/ceph/ceph:v18, name=ecstatic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:18:46 np0005533938 systemd[1]: libpod-conmon-1e06199fa9be875966b42d4fe09291389411808aa31b33a955a35ab540f5a240.scope: Deactivated successfully.
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.1704036 +0000 UTC m=+0.036669391 container create 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:46 np0005533938 systemd[1]: Started libpod-conmon-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope.
Nov 24 13:18:46 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.225151808 +0000 UTC m=+0.091417599 container init 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.23048936 +0000 UTC m=+0.096755151 container start 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.233358311 +0000 UTC m=+0.099624122 container attach 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.154636889 +0000 UTC m=+0.020902700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154862740' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:18:46 np0005533938 systemd[1]: libpod-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope: Deactivated successfully.
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.603094691 +0000 UTC m=+0.469360482 container died 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:18:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7dd0082c546c5993355ec683f17088cba3f0c719878a5a6a7b9a3bb877b60994-merged.mount: Deactivated successfully.
Nov 24 13:18:46 np0005533938 podman[74720]: 2025-11-24 18:18:46.642188071 +0000 UTC m=+0.508453862 container remove 1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137 (image=quay.io/ceph/ceph:v18, name=hungry_pare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 13:18:46 np0005533938 systemd[1]: libpod-conmon-1bf98407d6e75106abac6f86aeb21929728513d9181c8be15643a5cb08832137.scope: Deactivated successfully.
Nov 24 13:18:46 np0005533938 systemd[1]: Stopping Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: mon.compute-0@0(leader) e1 shutdown
Nov 24 13:18:46 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74568]: 2025-11-24T18:18:46.802+0000 7fed6ffcf640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 24 13:18:46 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74568]: 2025-11-24T18:18:46.802+0000 7fed6ffcf640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 13:18:46 np0005533938 ceph-mon[74572]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 13:18:47 np0005533938 podman[74807]: 2025-11-24 18:18:47.016328011 +0000 UTC m=+0.240509347 container died 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:18:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dc8a18acab957d2d39d4f0ba9f95371e45b097f9aedc3914bbea3902be2f8e52-merged.mount: Deactivated successfully.
Nov 24 13:18:47 np0005533938 podman[74807]: 2025-11-24 18:18:47.049951785 +0000 UTC m=+0.274133121 container remove 5efd0838c252a6726be084a5a3e77f2b53c37f06f40551e6853ca14688755acf (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:47 np0005533938 bash[74807]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0
Nov 24 13:18:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 24 13:18:47 np0005533938 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mon.compute-0.service: Deactivated successfully.
Nov 24 13:18:47 np0005533938 systemd[1]: Stopped Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:47 np0005533938 systemd[1]: Starting Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:18:47 np0005533938 podman[74908]: 2025-11-24 18:18:47.343813642 +0000 UTC m=+0.035253015 container create 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb674b6d57fd032ad5bb9b8bc1f2f5a488878b6697c450e0fd8b76abe39601/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 podman[74908]: 2025-11-24 18:18:47.388394298 +0000 UTC m=+0.079833691 container init 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:18:47 np0005533938 podman[74908]: 2025-11-24 18:18:47.393487834 +0000 UTC m=+0.084927207 container start 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:18:47 np0005533938 bash[74908]: 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d
Nov 24 13:18:47 np0005533938 podman[74908]: 2025-11-24 18:18:47.328760839 +0000 UTC m=+0.020200232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:47 np0005533938 systemd[1]: Started Ceph mon.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: pidfile_write: ignore empty --pid-file
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: load: jerasure load: lrc 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Git sha 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: DB SUMMARY
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: DB Session ID:  WW3CBZDUF00LP3K0CKDH
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52078 ; 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                                     Options.env: 0x562aef2cfc40
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                                Options.info_log: 0x562af0d05040
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                                 Options.wal_dir: 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                    Options.write_buffer_manager: 0x562af0d14b40
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                               Options.row_cache: None
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                              Options.wal_filter: None
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.wal_compression: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.max_background_jobs: 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.max_total_wal_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:       Options.compaction_readahead_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Compression algorithms supported:
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kZSTD supported: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kXpressCompression supported: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kBZip2Compression supported: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kLZ4Compression supported: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kZlibCompression supported: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: #011kSnappyCompression supported: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:           Options.merge_operator: 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:        Options.compaction_filter: None
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562af0d04c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562af0cfd1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:        Options.write_buffer_size: 33554432
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:  Options.max_write_buffer_number: 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.compression: NoCompression
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.num_levels: 7
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5bcbf129-cc59-4441-a37f-051fd374ef44
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327427704, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327429768, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50351, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48030, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008327, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008327429855, "job": 1, "event": "recovery_finished"}
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562af0d26e00
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: DB pointer 0x562af0e2e000
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   52.48 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0   52.48 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???) e1 preinit fsid e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).mds e1 new map
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap 
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 24 13:18:47 np0005533938 podman[74928]: 2025-11-24 18:18:47.486975583 +0000 UTC m=+0.055760654 container create 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:18:47 np0005533938 ceph-mon[74927]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 24 13:18:47 np0005533938 systemd[1]: Started libpod-conmon-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope.
Nov 24 13:18:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:47 np0005533938 podman[74928]: 2025-11-24 18:18:47.469571771 +0000 UTC m=+0.038356822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:47 np0005533938 podman[74928]: 2025-11-24 18:18:47.581847496 +0000 UTC m=+0.150632597 container init 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:18:47 np0005533938 podman[74928]: 2025-11-24 18:18:47.587837905 +0000 UTC m=+0.156622946 container start 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:18:47 np0005533938 podman[74928]: 2025-11-24 18:18:47.591714121 +0000 UTC m=+0.160499162 container attach 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:18:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 24 13:18:48 np0005533938 systemd[1]: libpod-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope: Deactivated successfully.
Nov 24 13:18:48 np0005533938 podman[74928]: 2025-11-24 18:18:48.027385227 +0000 UTC m=+0.596170258 container died 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e0c6787e2f598e5ef98bb043eb3a0563e76f1ab2e37da4259490c7479f76381b-merged.mount: Deactivated successfully.
Nov 24 13:18:48 np0005533938 podman[74928]: 2025-11-24 18:18:48.07912762 +0000 UTC m=+0.647912701 container remove 5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b (image=quay.io/ceph/ceph:v18, name=dazzling_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:48 np0005533938 systemd[1]: libpod-conmon-5015cb42b9db880408ba603f90feb19254c1e9044fa7d7b1cc3c8e523c838a1b.scope: Deactivated successfully.
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.162232261 +0000 UTC m=+0.045688584 container create 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:18:48 np0005533938 systemd[1]: Started libpod-conmon-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope.
Nov 24 13:18:48 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.145492426 +0000 UTC m=+0.028948739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.246365618 +0000 UTC m=+0.129822011 container init 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.250794648 +0000 UTC m=+0.134250991 container start 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.254726195 +0000 UTC m=+0.138182538 container attach 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:18:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 24 13:18:48 np0005533938 systemd[1]: libpod-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope: Deactivated successfully.
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.671512823 +0000 UTC m=+0.554969206 container died 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:18:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d3ac6a212b8dc3b67ee30bec0b38bddb9f65760b822dc03877a698c5d5b0a3d1-merged.mount: Deactivated successfully.
Nov 24 13:18:48 np0005533938 podman[75020]: 2025-11-24 18:18:48.728753863 +0000 UTC m=+0.612210196 container remove 89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8 (image=quay.io/ceph/ceph:v18, name=ecstatic_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:18:48 np0005533938 systemd[1]: libpod-conmon-89698cdc826e9b3a1ea7f34820b285da92a1c9a52d893e025ddc008a5bf038a8.scope: Deactivated successfully.
Nov 24 13:18:48 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:48 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:48 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:49 np0005533938 systemd[1]: Reloading.
Nov 24 13:18:49 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:18:49 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:18:49 np0005533938 systemd[1]: Starting Ceph mgr.compute-0.dfqptp for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:18:49 np0005533938 podman[75199]: 2025-11-24 18:18:49.634565899 +0000 UTC m=+0.065841804 container create 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b845f98033c45bfaf39f84ded92c28d317ea5728d8257bc2709d1ffecb44de5e/merged/var/lib/ceph/mgr/ceph-compute-0.dfqptp supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 podman[75199]: 2025-11-24 18:18:49.692176098 +0000 UTC m=+0.123452013 container init 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:18:49 np0005533938 podman[75199]: 2025-11-24 18:18:49.701906559 +0000 UTC m=+0.133182454 container start 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:18:49 np0005533938 podman[75199]: 2025-11-24 18:18:49.609741863 +0000 UTC m=+0.041017798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:49 np0005533938 bash[75199]: 9eef9f776910beb7e6266469ef16ac3700a5a8c6b4085baaa34e42834d3065ec
Nov 24 13:18:49 np0005533938 systemd[1]: Started Ceph mgr.compute-0.dfqptp for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:18:49 np0005533938 ceph-mgr[75218]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:18:49 np0005533938 ceph-mgr[75218]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 13:18:49 np0005533938 ceph-mgr[75218]: pidfile_write: ignore empty --pid-file
Nov 24 13:18:49 np0005533938 podman[75219]: 2025-11-24 18:18:49.822640034 +0000 UTC m=+0.063432545 container create abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:18:49 np0005533938 systemd[1]: Started libpod-conmon-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope.
Nov 24 13:18:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:49 np0005533938 podman[75219]: 2025-11-24 18:18:49.805271863 +0000 UTC m=+0.046064354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:49 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'alerts'
Nov 24 13:18:49 np0005533938 podman[75219]: 2025-11-24 18:18:49.922710806 +0000 UTC m=+0.163503287 container init abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:18:49 np0005533938 podman[75219]: 2025-11-24 18:18:49.931726999 +0000 UTC m=+0.172519510 container start abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:18:49 np0005533938 podman[75219]: 2025-11-24 18:18:49.93537351 +0000 UTC m=+0.176166001 container attach abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:18:50 np0005533938 ceph-mgr[75218]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:18:50 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'balancer'
Nov 24 13:18:50 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:50.212+0000 7f1ca3816140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:18:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:18:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861444832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]: 
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]: {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "health": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "status": "HEALTH_OK",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "checks": {},
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "mutes": []
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "election_epoch": 5,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "quorum": [
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        0
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    ],
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "quorum_names": [
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "compute-0"
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    ],
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "quorum_age": 2,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "monmap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "epoch": 1,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "min_mon_release_name": "reef",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_mons": 1
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "osdmap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "epoch": 1,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_osds": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_up_osds": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "osd_up_since": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_in_osds": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "osd_in_since": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_remapped_pgs": 0
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "pgmap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "pgs_by_state": [],
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_pgs": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_pools": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_objects": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "data_bytes": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "bytes_used": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "bytes_avail": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "bytes_total": 0
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "fsmap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "epoch": 1,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "by_rank": [],
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "up:standby": 0
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "mgrmap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "available": false,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "num_standbys": 0,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "modules": [
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:            "iostat",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:            "nfs",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:            "restful"
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        ],
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "services": {}
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "servicemap": {
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "epoch": 1,
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:        "services": {}
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    },
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]:    "progress_events": {}
Nov 24 13:18:50 np0005533938 priceless_jennings[75259]: }
Nov 24 13:18:50 np0005533938 systemd[1]: libpod-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope: Deactivated successfully.
Nov 24 13:18:50 np0005533938 podman[75219]: 2025-11-24 18:18:50.334568441 +0000 UTC m=+0.575360912 container died abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:50 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c31705bf79522c50c3bf1f55f6f506b16261a28ccb0241955ebafd688445ffab-merged.mount: Deactivated successfully.
Nov 24 13:18:50 np0005533938 podman[75219]: 2025-11-24 18:18:50.377381873 +0000 UTC m=+0.618174364 container remove abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523 (image=quay.io/ceph/ceph:v18, name=priceless_jennings, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:18:50 np0005533938 systemd[1]: libpod-conmon-abafd915e23be530285c5b7111ad4d7ff886aa7eebf68d79f9780be973716523.scope: Deactivated successfully.
Nov 24 13:18:50 np0005533938 ceph-mgr[75218]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:18:50 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'cephadm'
Nov 24 13:18:50 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:50.523+0000 7f1ca3816140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.452871389 +0000 UTC m=+0.048187696 container create 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:18:52 np0005533938 systemd[1]: Started libpod-conmon-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope.
Nov 24 13:18:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.426751142 +0000 UTC m=+0.022067459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.536660758 +0000 UTC m=+0.131977065 container init 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.543000865 +0000 UTC m=+0.138317212 container start 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.547717102 +0000 UTC m=+0.143033409 container attach 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:18:52 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'crash'
Nov 24 13:18:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:18:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006523604' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]: 
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]: {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "health": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "status": "HEALTH_OK",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "checks": {},
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "mutes": []
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "election_epoch": 5,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "quorum": [
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        0
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    ],
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "quorum_names": [
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "compute-0"
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    ],
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "quorum_age": 5,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "monmap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "epoch": 1,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "min_mon_release_name": "reef",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_mons": 1
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "osdmap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "epoch": 1,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_osds": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_up_osds": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "osd_up_since": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_in_osds": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "osd_in_since": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_remapped_pgs": 0
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "pgmap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "pgs_by_state": [],
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_pgs": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_pools": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_objects": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "data_bytes": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "bytes_used": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "bytes_avail": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "bytes_total": 0
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "fsmap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "epoch": 1,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "by_rank": [],
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "up:standby": 0
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "mgrmap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "available": false,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "num_standbys": 0,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "modules": [
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:            "iostat",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:            "nfs",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:            "restful"
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        ],
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "services": {}
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "servicemap": {
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "epoch": 1,
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:        "services": {}
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    },
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]:    "progress_events": {}
Nov 24 13:18:52 np0005533938 quirky_shockley[75328]: }
Nov 24 13:18:52 np0005533938 systemd[1]: libpod-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope: Deactivated successfully.
Nov 24 13:18:52 np0005533938 podman[75311]: 2025-11-24 18:18:52.956428839 +0000 UTC m=+0.551745156 container died 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:52 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:52.957+0000 7f1ca3816140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:18:52 np0005533938 ceph-mgr[75218]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:18:52 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'dashboard'
Nov 24 13:18:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ff1d47cb8a0b5de75cdfd19be374299a8a4869afddca4a276aa41ca1d6d8cbad-merged.mount: Deactivated successfully.
Nov 24 13:18:53 np0005533938 podman[75311]: 2025-11-24 18:18:53.004749977 +0000 UTC m=+0.600066284 container remove 5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb (image=quay.io/ceph/ceph:v18, name=quirky_shockley, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:53 np0005533938 systemd[1]: libpod-conmon-5c3815131edaa6909ff90dc448770e47cefecc3f61b46ecadd9c47ea9aa0d5cb.scope: Deactivated successfully.
Nov 24 13:18:54 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'devicehealth'
Nov 24 13:18:54 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:54.669+0000 7f1ca3816140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:18:54 np0005533938 ceph-mgr[75218]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:18:54 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 13:18:55 np0005533938 podman[75366]: 2025-11-24 18:18:55.082465559 +0000 UTC m=+0.039343067 container create ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:18:55 np0005533938 systemd[1]: Started libpod-conmon-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope.
Nov 24 13:18:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:55 np0005533938 podman[75366]: 2025-11-24 18:18:55.064574445 +0000 UTC m=+0.021451953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:55 np0005533938 podman[75366]: 2025-11-24 18:18:55.168210936 +0000 UTC m=+0.125088464 container init ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:55 np0005533938 podman[75366]: 2025-11-24 18:18:55.175552668 +0000 UTC m=+0.132430176 container start ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]:  from numpy import show_config as show_numpy_config
Nov 24 13:18:55 np0005533938 podman[75366]: 2025-11-24 18:18:55.179117286 +0000 UTC m=+0.135994814 container attach ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.187+0000 7f1ca3816140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'influx'
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.436+0000 7f1ca3816140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'insights'
Nov 24 13:18:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:18:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034015369' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]: 
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]: {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "health": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "status": "HEALTH_OK",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "checks": {},
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "mutes": []
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "election_epoch": 5,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "quorum": [
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        0
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    ],
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "quorum_names": [
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "compute-0"
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    ],
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "quorum_age": 8,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "monmap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "epoch": 1,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "min_mon_release_name": "reef",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_mons": 1
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "osdmap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "epoch": 1,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_osds": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_up_osds": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "osd_up_since": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_in_osds": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "osd_in_since": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_remapped_pgs": 0
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "pgmap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "pgs_by_state": [],
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_pgs": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_pools": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_objects": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "data_bytes": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "bytes_used": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "bytes_avail": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "bytes_total": 0
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "fsmap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "epoch": 1,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "by_rank": [],
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "up:standby": 0
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "mgrmap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "available": false,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "num_standbys": 0,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "modules": [
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:            "iostat",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:            "nfs",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:            "restful"
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        ],
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "services": {}
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "servicemap": {
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "epoch": 1,
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:        "services": {}
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    },
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]:    "progress_events": {}
Nov 24 13:18:55 np0005533938 friendly_yalow[75382]: }
Nov 24 13:18:55 np0005533938 systemd[1]: libpod-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope: Deactivated successfully.
Nov 24 13:18:55 np0005533938 podman[75408]: 2025-11-24 18:18:55.665008008 +0000 UTC m=+0.026455627 container died ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'iostat'
Nov 24 13:18:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-30b45e9beb5ce14d4310c6381014df7801ea16001851d821c39c140c0eac3d08-merged.mount: Deactivated successfully.
Nov 24 13:18:55 np0005533938 podman[75408]: 2025-11-24 18:18:55.711024369 +0000 UTC m=+0.072471968 container remove ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c (image=quay.io/ceph/ceph:v18, name=friendly_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:18:55 np0005533938 systemd[1]: libpod-conmon-ed37a08c2e12a5522d21f12bc395f926339ee6380c58104c087c64a4b6e4297c.scope: Deactivated successfully.
Nov 24 13:18:55 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:55.907+0000 7f1ca3816140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 13:18:55 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'k8sevents'
Nov 24 13:18:57 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'localpool'
Nov 24 13:18:57 np0005533938 podman[75421]: 2025-11-24 18:18:57.829158123 +0000 UTC m=+0.055273751 container create 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:18:57 np0005533938 systemd[1]: Started libpod-conmon-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope.
Nov 24 13:18:57 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:18:57 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:57 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:57 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:18:57 np0005533938 podman[75421]: 2025-11-24 18:18:57.812198973 +0000 UTC m=+0.038314611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:18:57 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 13:18:57 np0005533938 podman[75421]: 2025-11-24 18:18:57.920425127 +0000 UTC m=+0.146540785 container init 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:18:57 np0005533938 podman[75421]: 2025-11-24 18:18:57.927291587 +0000 UTC m=+0.153407195 container start 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:18:57 np0005533938 podman[75421]: 2025-11-24 18:18:57.930299462 +0000 UTC m=+0.156415140 container attach 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:18:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:18:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975182773' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]: 
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]: {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "health": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "status": "HEALTH_OK",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "checks": {},
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "mutes": []
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "election_epoch": 5,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "quorum": [
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        0
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    ],
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "quorum_names": [
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "compute-0"
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    ],
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "quorum_age": 10,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "monmap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "epoch": 1,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "min_mon_release_name": "reef",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_mons": 1
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "osdmap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "epoch": 1,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_osds": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_up_osds": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "osd_up_since": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_in_osds": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "osd_in_since": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_remapped_pgs": 0
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "pgmap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "pgs_by_state": [],
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_pgs": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_pools": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_objects": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "data_bytes": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "bytes_used": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "bytes_avail": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "bytes_total": 0
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "fsmap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "epoch": 1,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "by_rank": [],
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "up:standby": 0
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "mgrmap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "available": false,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "num_standbys": 0,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "modules": [
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:            "iostat",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:            "nfs",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:            "restful"
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        ],
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "services": {}
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "servicemap": {
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "epoch": 1,
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:        "services": {}
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    },
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]:    "progress_events": {}
Nov 24 13:18:58 np0005533938 unruffled_mcnulty[75437]: }
Nov 24 13:18:58 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'mirroring'
Nov 24 13:18:58 np0005533938 systemd[1]: libpod-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope: Deactivated successfully.
Nov 24 13:18:58 np0005533938 podman[75421]: 2025-11-24 18:18:58.610016771 +0000 UTC m=+0.836132399 container died 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:18:58 np0005533938 systemd[1]: var-lib-containers-storage-overlay-657ee8b255f4ac1e7ec4396e02a2065ee59f367fc8b7243cd6e152a320d4b783-merged.mount: Deactivated successfully.
Nov 24 13:18:58 np0005533938 podman[75421]: 2025-11-24 18:18:58.651889469 +0000 UTC m=+0.878005087 container remove 480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0 (image=quay.io/ceph/ceph:v18, name=unruffled_mcnulty, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:18:58 np0005533938 systemd[1]: libpod-conmon-480acad95ab7e59f7f8eada69aa4598da5533bbe4d360ea506dd106ed9114bb0.scope: Deactivated successfully.
Nov 24 13:18:58 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'nfs'
Nov 24 13:18:59 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:18:59.524+0000 7f1ca3816140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 13:18:59 np0005533938 ceph-mgr[75218]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 13:18:59 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'orchestrator'
Nov 24 13:19:00 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.185+0000 7f1ca3816140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 13:19:00 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.449+0000 7f1ca3816140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'osd_support'
Nov 24 13:19:00 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.676+0000 7f1ca3816140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 13:19:00 np0005533938 podman[75475]: 2025-11-24 18:19:00.728391592 +0000 UTC m=+0.046162856 container create 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:19:00 np0005533938 systemd[1]: Started libpod-conmon-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope.
Nov 24 13:19:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:00 np0005533938 podman[75475]: 2025-11-24 18:19:00.712103828 +0000 UTC m=+0.029875102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:00 np0005533938 podman[75475]: 2025-11-24 18:19:00.828211728 +0000 UTC m=+0.145982982 container init 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:00 np0005533938 podman[75475]: 2025-11-24 18:19:00.838440071 +0000 UTC m=+0.156211365 container start 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:19:00 np0005533938 podman[75475]: 2025-11-24 18:19:00.844952213 +0000 UTC m=+0.162723487 container attach 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:00 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:00.962+0000 7f1ca3816140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 13:19:00 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'progress'
Nov 24 13:19:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:19:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211118639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]: 
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]: {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "health": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "status": "HEALTH_OK",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "checks": {},
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "mutes": []
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "election_epoch": 5,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "quorum": [
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        0
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    ],
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "quorum_names": [
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "compute-0"
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    ],
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "quorum_age": 13,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "monmap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "epoch": 1,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "min_mon_release_name": "reef",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_mons": 1
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "osdmap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "epoch": 1,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_osds": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_up_osds": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "osd_up_since": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_in_osds": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "osd_in_since": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_remapped_pgs": 0
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "pgmap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "pgs_by_state": [],
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_pgs": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_pools": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_objects": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "data_bytes": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "bytes_used": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "bytes_avail": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "bytes_total": 0
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "fsmap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "epoch": 1,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "by_rank": [],
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "up:standby": 0
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "mgrmap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "available": false,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "num_standbys": 0,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "modules": [
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:            "iostat",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:            "nfs",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:            "restful"
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        ],
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "services": {}
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "servicemap": {
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "epoch": 1,
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:        "services": {}
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    },
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]:    "progress_events": {}
Nov 24 13:19:01 np0005533938 flamboyant_cannon[75491]: }
Nov 24 13:19:01 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:01.214+0000 7f1ca3816140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 13:19:01 np0005533938 ceph-mgr[75218]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 13:19:01 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'prometheus'
Nov 24 13:19:01 np0005533938 systemd[1]: libpod-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope: Deactivated successfully.
Nov 24 13:19:01 np0005533938 podman[75475]: 2025-11-24 18:19:01.218932719 +0000 UTC m=+0.536703973 container died 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:19:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-965679880d523cc9f4b24aaa61242bc24d5dc63c3bba3ab9394bf0cb3818998c-merged.mount: Deactivated successfully.
Nov 24 13:19:01 np0005533938 podman[75475]: 2025-11-24 18:19:01.257795542 +0000 UTC m=+0.575566796 container remove 6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442 (image=quay.io/ceph/ceph:v18, name=flamboyant_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:01 np0005533938 systemd[1]: libpod-conmon-6b7820ef2f88a0d6590e0b889a334bc53726e9240a8ac455a853f3010af63442.scope: Deactivated successfully.
Nov 24 13:19:02 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:02.316+0000 7f1ca3816140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 13:19:02 np0005533938 ceph-mgr[75218]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 13:19:02 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rbd_support'
Nov 24 13:19:02 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:02.626+0000 7f1ca3816140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 13:19:02 np0005533938 ceph-mgr[75218]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 13:19:02 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'restful'
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.347111852 +0000 UTC m=+0.053230011 container create 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:19:03 np0005533938 systemd[1]: Started libpod-conmon-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope.
Nov 24 13:19:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.323622359 +0000 UTC m=+0.029740508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.421348713 +0000 UTC m=+0.127466842 container init 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.42808059 +0000 UTC m=+0.134198709 container start 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.431277459 +0000 UTC m=+0.137395578 container attach 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:19:03 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rgw'
Nov 24 13:19:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:19:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243591140' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]: 
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]: {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "health": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "status": "HEALTH_OK",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "checks": {},
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "mutes": []
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "election_epoch": 5,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "quorum": [
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        0
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    ],
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "quorum_names": [
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "compute-0"
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    ],
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "quorum_age": 16,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "monmap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "epoch": 1,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "min_mon_release_name": "reef",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_mons": 1
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "osdmap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "epoch": 1,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_osds": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_up_osds": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "osd_up_since": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_in_osds": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "osd_in_since": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_remapped_pgs": 0
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "pgmap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "pgs_by_state": [],
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_pgs": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_pools": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_objects": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "data_bytes": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "bytes_used": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "bytes_avail": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "bytes_total": 0
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "fsmap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "epoch": 1,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "by_rank": [],
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "up:standby": 0
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "mgrmap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "available": false,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "num_standbys": 0,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "modules": [
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:            "iostat",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:            "nfs",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:            "restful"
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        ],
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "services": {}
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "servicemap": {
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "epoch": 1,
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:        "services": {}
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    },
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]:    "progress_events": {}
Nov 24 13:19:03 np0005533938 nifty_lamport[75546]: }
Nov 24 13:19:03 np0005533938 systemd[1]: libpod-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope: Deactivated successfully.
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.819861467 +0000 UTC m=+0.525979586 container died 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:03 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8584d9b835296ce7c947453ea976fd00f5d7d81a0c15117b4a55dd36f4404e0b-merged.mount: Deactivated successfully.
Nov 24 13:19:03 np0005533938 podman[75530]: 2025-11-24 18:19:03.863610172 +0000 UTC m=+0.569728321 container remove 462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819 (image=quay.io/ceph/ceph:v18, name=nifty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:03 np0005533938 systemd[1]: libpod-conmon-462319ce44727e444dd548ac648362779a8d47a6db192788afdde5a8b7ac0819.scope: Deactivated successfully.
Nov 24 13:19:04 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:04.239+0000 7f1ca3816140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 13:19:04 np0005533938 ceph-mgr[75218]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 13:19:04 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rook'
Nov 24 13:19:05 np0005533938 podman[75586]: 2025-11-24 18:19:05.923283127 +0000 UTC m=+0.039548762 container create b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:19:05 np0005533938 systemd[1]: Started libpod-conmon-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope.
Nov 24 13:19:05 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:05.904771958 +0000 UTC m=+0.021037613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:06.014417927 +0000 UTC m=+0.130683662 container init b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:06.019470112 +0000 UTC m=+0.135735757 container start b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:06.023144394 +0000 UTC m=+0.139410069 container attach b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:19:06 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.317+0000 7f1ca3816140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'selftest'
Nov 24 13:19:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:19:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713541516' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]: 
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]: {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "health": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "status": "HEALTH_OK",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "checks": {},
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "mutes": []
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "election_epoch": 5,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "quorum": [
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        0
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    ],
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "quorum_names": [
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "compute-0"
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    ],
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "quorum_age": 18,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "monmap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "epoch": 1,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "min_mon_release_name": "reef",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_mons": 1
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "osdmap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "epoch": 1,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_osds": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_up_osds": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "osd_up_since": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_in_osds": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "osd_in_since": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_remapped_pgs": 0
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "pgmap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "pgs_by_state": [],
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_pgs": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_pools": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_objects": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "data_bytes": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "bytes_used": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "bytes_avail": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "bytes_total": 0
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "fsmap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "epoch": 1,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "by_rank": [],
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "up:standby": 0
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "mgrmap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "available": false,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "num_standbys": 0,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "modules": [
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:            "iostat",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:            "nfs",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:            "restful"
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        ],
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "services": {}
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "servicemap": {
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "epoch": 1,
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:        "services": {}
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    },
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]:    "progress_events": {}
Nov 24 13:19:06 np0005533938 interesting_torvalds[75603]: }
Nov 24 13:19:06 np0005533938 systemd[1]: libpod-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope: Deactivated successfully.
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:06.438793423 +0000 UTC m=+0.555059088 container died b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-54f6e2a144b277419f749c6747eb54ee57afc0c39e1f37190e5fcca30c686473-merged.mount: Deactivated successfully.
Nov 24 13:19:06 np0005533938 podman[75586]: 2025-11-24 18:19:06.499267083 +0000 UTC m=+0.615532748 container remove b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0 (image=quay.io/ceph/ceph:v18, name=interesting_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 24 13:19:06 np0005533938 systemd[1]: libpod-conmon-b8625a796ff8e7a96d64692400a427fde1baa745731ee9bae6ba6aea87fbd6a0.scope: Deactivated successfully.
Nov 24 13:19:06 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.576+0000 7f1ca3816140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'snap_schedule'
Nov 24 13:19:06 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:06.830+0000 7f1ca3816140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 13:19:06 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'stats'
Nov 24 13:19:07 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'status'
Nov 24 13:19:07 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:07.369+0000 7f1ca3816140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 13:19:07 np0005533938 ceph-mgr[75218]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 13:19:07 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'telegraf'
Nov 24 13:19:07 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:07.618+0000 7f1ca3816140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 13:19:07 np0005533938 ceph-mgr[75218]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 13:19:07 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'telemetry'
Nov 24 13:19:08 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:08.222+0000 7f1ca3816140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 13:19:08 np0005533938 ceph-mgr[75218]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 13:19:08 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 13:19:08 np0005533938 podman[75641]: 2025-11-24 18:19:08.615173962 +0000 UTC m=+0.066489090 container create 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:19:08 np0005533938 systemd[1]: Started libpod-conmon-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope.
Nov 24 13:19:08 np0005533938 podman[75641]: 2025-11-24 18:19:08.592581152 +0000 UTC m=+0.043896380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:08 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:08 np0005533938 podman[75641]: 2025-11-24 18:19:08.707634345 +0000 UTC m=+0.158949483 container init 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:19:08 np0005533938 podman[75641]: 2025-11-24 18:19:08.72276427 +0000 UTC m=+0.174079408 container start 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:19:08 np0005533938 podman[75641]: 2025-11-24 18:19:08.726587565 +0000 UTC m=+0.177902683 container attach 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:19:08 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:08.934+0000 7f1ca3816140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:08 np0005533938 ceph-mgr[75218]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:08 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'volumes'
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857565935' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]: 
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]: {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "health": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "status": "HEALTH_OK",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "checks": {},
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "mutes": []
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "election_epoch": 5,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "quorum": [
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        0
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    ],
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "quorum_names": [
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "compute-0"
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    ],
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "quorum_age": 21,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "monmap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "epoch": 1,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "min_mon_release_name": "reef",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_mons": 1
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "osdmap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "epoch": 1,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_osds": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_up_osds": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "osd_up_since": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_in_osds": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "osd_in_since": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_remapped_pgs": 0
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "pgmap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "pgs_by_state": [],
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_pgs": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_pools": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_objects": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "data_bytes": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "bytes_used": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "bytes_avail": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "bytes_total": 0
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "fsmap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "epoch": 1,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "by_rank": [],
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "up:standby": 0
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "mgrmap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "available": false,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "num_standbys": 0,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "modules": [
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:            "iostat",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:            "nfs",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:            "restful"
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        ],
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "services": {}
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "servicemap": {
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "epoch": 1,
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:        "services": {}
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    },
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]:    "progress_events": {}
Nov 24 13:19:09 np0005533938 vigorous_cartwright[75658]: }
Nov 24 13:19:09 np0005533938 systemd[1]: libpod-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope: Deactivated successfully.
Nov 24 13:19:09 np0005533938 podman[75641]: 2025-11-24 18:19:09.094334056 +0000 UTC m=+0.545649174 container died 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:19:09 np0005533938 systemd[1]: var-lib-containers-storage-overlay-23128c87f29dee7f0b7268971ac6dcb47ffb8fe66ae4da34308303e1d9dd57c8-merged.mount: Deactivated successfully.
Nov 24 13:19:09 np0005533938 podman[75641]: 2025-11-24 18:19:09.133408646 +0000 UTC m=+0.584723764 container remove 90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065 (image=quay.io/ceph/ceph:v18, name=vigorous_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:09 np0005533938 systemd[1]: libpod-conmon-90bb428af2e8435e2a7fb887b904a1f144d5709841a30deea9f639dbd6824065.scope: Deactivated successfully.
Nov 24 13:19:09 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:09.656+0000 7f1ca3816140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'zabbix'
Nov 24 13:19:09 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:09.909+0000 7f1ca3816140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: ms_deliver_dispatch: unhandled message 0x5607d49131e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dfqptp
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr handle_mgr_map Activating!
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr handle_mgr_map I am now activating
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.dfqptp(active, starting, since 0.0133003s)
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: balancer
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: crash
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer INFO root] Starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: devicehealth
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Manager daemon compute-0.dfqptp is now available
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:19:09
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: iostat
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: nfs
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: orchestrator
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: pg_autoscaler
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: progress
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [progress INFO root] Loading...
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [progress INFO root] No stored events to load
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [progress INFO root] Loaded [] historic events
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] recovery thread starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] starting setup
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: rbd_support
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: restful
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [restful WARNING root] server not running: no certificate configured
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: status
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: telemetry
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] PerfHandler: starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TaskHandler: starting
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"} v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] setup complete
Nov 24 13:19:09 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: volumes
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 24 13:19:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: Activating manager daemon compute-0.dfqptp
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: Manager daemon compute-0.dfqptp is now available
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: from='mgr.14102 192.168.122.100:0/1058517204' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:10 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.dfqptp(active, since 1.02358s)
Nov 24 13:19:11 np0005533938 podman[75777]: 2025-11-24 18:19:11.203663863 +0000 UTC m=+0.048251087 container create 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:19:11 np0005533938 systemd[1]: Started libpod-conmon-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope.
Nov 24 13:19:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:11 np0005533938 podman[75777]: 2025-11-24 18:19:11.178607242 +0000 UTC m=+0.023194576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:11 np0005533938 podman[75777]: 2025-11-24 18:19:11.289669466 +0000 UTC m=+0.134256770 container init 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:19:11 np0005533938 podman[75777]: 2025-11-24 18:19:11.295262095 +0000 UTC m=+0.139849329 container start 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:19:11 np0005533938 podman[75777]: 2025-11-24 18:19:11.299638224 +0000 UTC m=+0.144225508 container attach 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 24 13:19:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3201577320' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]: 
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]: {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "health": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "status": "HEALTH_OK",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "checks": {},
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "mutes": []
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "election_epoch": 5,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "quorum": [
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        0
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    ],
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "quorum_names": [
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "compute-0"
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    ],
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "quorum_age": 24,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "monmap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "epoch": 1,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "min_mon_release_name": "reef",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_mons": 1
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "osdmap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "epoch": 1,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_osds": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_up_osds": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "osd_up_since": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_in_osds": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "osd_in_since": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_remapped_pgs": 0
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "pgmap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "pgs_by_state": [],
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_pgs": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_pools": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_objects": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "data_bytes": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "bytes_used": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "bytes_avail": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "bytes_total": 0
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "fsmap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "epoch": 1,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "by_rank": [],
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "up:standby": 0
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "mgrmap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "available": true,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "num_standbys": 0,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "modules": [
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:            "iostat",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:            "nfs",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:            "restful"
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        ],
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "services": {}
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "servicemap": {
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "epoch": 1,
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "modified": "2025-11-24T18:18:44.978620+0000",
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:        "services": {}
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    },
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]:    "progress_events": {}
Nov 24 13:19:11 np0005533938 dazzling_joliot[75794]: }
Nov 24 13:19:11 np0005533938 systemd[1]: libpod-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope: Deactivated successfully.
Nov 24 13:19:11 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:11 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.dfqptp(active, since 2s)
Nov 24 13:19:11 np0005533938 podman[75820]: 2025-11-24 18:19:11.959533811 +0000 UTC m=+0.032170959 container died 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-40b2c0d8de39642fa263005e93be1650bf70b7fc43240c4a1e52772bdc0fb6dd-merged.mount: Deactivated successfully.
Nov 24 13:19:12 np0005533938 podman[75820]: 2025-11-24 18:19:12.000833305 +0000 UTC m=+0.073470363 container remove 52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437 (image=quay.io/ceph/ceph:v18, name=dazzling_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:12 np0005533938 systemd[1]: libpod-conmon-52b662b0657de1b3508652588350165f56662564c26bc41e27e1fc68515c8437.scope: Deactivated successfully.
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.094884838 +0000 UTC m=+0.056188115 container create c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:12 np0005533938 systemd[1]: Started libpod-conmon-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope.
Nov 24 13:19:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.070619946 +0000 UTC m=+0.031923243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.185141545 +0000 UTC m=+0.146444872 container init c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.192371225 +0000 UTC m=+0.153674482 container start c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.197119753 +0000 UTC m=+0.158423100 container attach c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:19:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 13:19:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208384906' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 13:19:12 np0005533938 systemd[1]: libpod-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope: Deactivated successfully.
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.767842318 +0000 UTC m=+0.729145565 container died c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:19:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-64363857c0abca10fb1fa43f8ab32cc62643c58aa5adc7234c327bd13f5fbc2b-merged.mount: Deactivated successfully.
Nov 24 13:19:12 np0005533938 podman[75836]: 2025-11-24 18:19:12.813329736 +0000 UTC m=+0.774632983 container remove c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5 (image=quay.io/ceph/ceph:v18, name=interesting_pare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:12 np0005533938 systemd[1]: libpod-conmon-c171c0c312a9e9a2173e422759e17927710e1a24c3758b3a877e11a23cc4a8a5.scope: Deactivated successfully.
Nov 24 13:19:12 np0005533938 podman[75892]: 2025-11-24 18:19:12.866268259 +0000 UTC m=+0.035546243 container create 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:19:12 np0005533938 systemd[1]: Started libpod-conmon-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope.
Nov 24 13:19:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:12 np0005533938 podman[75892]: 2025-11-24 18:19:12.933465636 +0000 UTC m=+0.102743700 container init 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:19:12 np0005533938 podman[75892]: 2025-11-24 18:19:12.939812643 +0000 UTC m=+0.109090657 container start 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:19:12 np0005533938 podman[75892]: 2025-11-24 18:19:12.943483014 +0000 UTC m=+0.112761008 container attach 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:12 np0005533938 podman[75892]: 2025-11-24 18:19:12.850033666 +0000 UTC m=+0.019311670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:12 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/4208384906' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 13:19:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 24 13:19:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:13 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 24 13:19:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  1: '-n'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  2: 'mgr.compute-0.dfqptp'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  3: '-f'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  4: '--setuser'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  5: 'ceph'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  6: '--setgroup'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  7: 'ceph'
Nov 24 13:19:13 np0005533938 ceph-mgr[75218]: mgr respawn  8: '--default-log-to-file=false'
Nov 24 13:19:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.dfqptp(active, since 4s)
Nov 24 13:19:13 np0005533938 systemd[1]: libpod-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope: Deactivated successfully.
Nov 24 13:19:13 np0005533938 podman[75892]: 2025-11-24 18:19:13.991327324 +0000 UTC m=+1.160605308 container died 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 13:19:14 np0005533938 systemd[1]: var-lib-containers-storage-overlay-68b17fef3c193a71381bd813ab5cf7bcad2a6a1d0645b02622c816f0be19cc5f-merged.mount: Deactivated successfully.
Nov 24 13:19:14 np0005533938 podman[75892]: 2025-11-24 18:19:14.034276739 +0000 UTC m=+1.203554713 container remove 2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d (image=quay.io/ceph/ceph:v18, name=romantic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:14 np0005533938 systemd[1]: libpod-conmon-2026305d2f8d212d814a37efa294ba71090eaa3cd73e3d08c5b57e362f152f6d.scope: Deactivated successfully.
Nov 24 13:19:14 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: ignoring --setuser ceph since I am not root
Nov 24 13:19:14 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: ignoring --setgroup ceph since I am not root
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: pidfile_write: ignore empty --pid-file
Nov 24 13:19:14 np0005533938 podman[75946]: 2025-11-24 18:19:14.120666581 +0000 UTC m=+0.060237255 container create c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 13:19:14 np0005533938 systemd[1]: Started libpod-conmon-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope.
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'alerts'
Nov 24 13:19:14 np0005533938 podman[75946]: 2025-11-24 18:19:14.100232124 +0000 UTC m=+0.039802778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:14 np0005533938 podman[75946]: 2025-11-24 18:19:14.356991953 +0000 UTC m=+0.296562667 container init c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:14 np0005533938 podman[75946]: 2025-11-24 18:19:14.36210083 +0000 UTC m=+0.301671494 container start c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:19:14 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:14.492+0000 7f63da190140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'balancer'
Nov 24 13:19:14 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:14.748+0000 7f63da190140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:19:14 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'cephadm'
Nov 24 13:19:14 np0005533938 podman[75946]: 2025-11-24 18:19:14.796072863 +0000 UTC m=+0.735643497 container attach c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 13:19:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214343238' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]: {
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]:    "epoch": 5,
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]:    "available": true,
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]:    "active_name": "compute-0.dfqptp",
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]:    "num_standby": 0
Nov 24 13:19:15 np0005533938 flamboyant_mirzakhani[75987]: }
Nov 24 13:19:15 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2356186078' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 24 13:19:15 np0005533938 systemd[1]: libpod-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope: Deactivated successfully.
Nov 24 13:19:15 np0005533938 conmon[75987]: conmon c4c8ffc765f8ef1bc8cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope/container/memory.events
Nov 24 13:19:15 np0005533938 podman[75946]: 2025-11-24 18:19:15.073560536 +0000 UTC m=+1.013131200 container died c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-20cb8028bb67275ee77bfeb39bb003303ae97c5f326f0bf2246a3c9d44a775f6-merged.mount: Deactivated successfully.
Nov 24 13:19:15 np0005533938 podman[75946]: 2025-11-24 18:19:15.638439516 +0000 UTC m=+1.578010190 container remove c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802 (image=quay.io/ceph/ceph:v18, name=flamboyant_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:15 np0005533938 systemd[1]: libpod-conmon-c4c8ffc765f8ef1bc8cd4d7fbc6134179c6e715ab40c5a57b681d7cf171e2802.scope: Deactivated successfully.
Nov 24 13:19:15 np0005533938 podman[76027]: 2025-11-24 18:19:15.728533 +0000 UTC m=+0.056572844 container create faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:19:15 np0005533938 systemd[1]: Started libpod-conmon-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope.
Nov 24 13:19:15 np0005533938 podman[76027]: 2025-11-24 18:19:15.707913428 +0000 UTC m=+0.035953252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:15 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:15 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:15 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:15 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:15 np0005533938 podman[76027]: 2025-11-24 18:19:15.853804637 +0000 UTC m=+0.181844501 container init faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:15 np0005533938 podman[76027]: 2025-11-24 18:19:15.863565449 +0000 UTC m=+0.191605283 container start faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:19:15 np0005533938 podman[76027]: 2025-11-24 18:19:15.868712576 +0000 UTC m=+0.196752410 container attach faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:19:16 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'crash'
Nov 24 13:19:17 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:17.081+0000 7f63da190140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:19:17 np0005533938 ceph-mgr[75218]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:19:17 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'dashboard'
Nov 24 13:19:18 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'devicehealth'
Nov 24 13:19:18 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:18.849+0000 7f63da190140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:19:18 np0005533938 ceph-mgr[75218]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:19:18 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 13:19:19 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 13:19:19 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 13:19:19 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]:  from numpy import show_config as show_numpy_config
Nov 24 13:19:19 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:19.375+0000 7f63da190140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:19:19 np0005533938 ceph-mgr[75218]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:19:19 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'influx'
Nov 24 13:19:19 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:19.607+0000 7f63da190140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:19:19 np0005533938 ceph-mgr[75218]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:19:19 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'insights'
Nov 24 13:19:19 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'iostat'
Nov 24 13:19:20 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:20.088+0000 7f63da190140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 13:19:20 np0005533938 ceph-mgr[75218]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 24 13:19:20 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'k8sevents'
Nov 24 13:19:21 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'localpool'
Nov 24 13:19:22 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'mds_autoscaler'
Nov 24 13:19:22 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'mirroring'
Nov 24 13:19:23 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'nfs'
Nov 24 13:19:23 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:23.776+0000 7f63da190140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 13:19:23 np0005533938 ceph-mgr[75218]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 24 13:19:23 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'orchestrator'
Nov 24 13:19:24 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.485+0000 7f63da190140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:24 np0005533938 ceph-mgr[75218]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:24 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'osd_perf_query'
Nov 24 13:19:24 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.767+0000 7f63da190140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 13:19:24 np0005533938 ceph-mgr[75218]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 24 13:19:24 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'osd_support'
Nov 24 13:19:24 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:24.998+0000 7f63da190140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'pg_autoscaler'
Nov 24 13:19:25 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:25.263+0000 7f63da190140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'progress'
Nov 24 13:19:25 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:25.530+0000 7f63da190140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 24 13:19:25 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'prometheus'
Nov 24 13:19:26 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:26.560+0000 7f63da190140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 13:19:26 np0005533938 ceph-mgr[75218]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 24 13:19:26 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rbd_support'
Nov 24 13:19:26 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:26.865+0000 7f63da190140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 13:19:26 np0005533938 ceph-mgr[75218]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 24 13:19:26 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'restful'
Nov 24 13:19:27 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rgw'
Nov 24 13:19:28 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:28.247+0000 7f63da190140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 13:19:28 np0005533938 ceph-mgr[75218]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 24 13:19:28 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'rook'
Nov 24 13:19:30 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.340+0000 7f63da190140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'selftest'
Nov 24 13:19:30 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.625+0000 7f63da190140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'snap_schedule'
Nov 24 13:19:30 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:30.907+0000 7f63da190140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 24 13:19:30 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'stats'
Nov 24 13:19:31 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'status'
Nov 24 13:19:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:31.450+0000 7f63da190140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 13:19:31 np0005533938 ceph-mgr[75218]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 24 13:19:31 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'telegraf'
Nov 24 13:19:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:31.714+0000 7f63da190140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 13:19:31 np0005533938 ceph-mgr[75218]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 24 13:19:31 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'telemetry'
Nov 24 13:19:32 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:32.363+0000 7f63da190140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 13:19:32 np0005533938 ceph-mgr[75218]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 24 13:19:32 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'test_orchestrator'
Nov 24 13:19:33 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:33.044+0000 7f63da190140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:33 np0005533938 ceph-mgr[75218]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 24 13:19:33 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'volumes'
Nov 24 13:19:33 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:33.790+0000 7f63da190140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 13:19:33 np0005533938 ceph-mgr[75218]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 24 13:19:33 np0005533938 ceph-mgr[75218]: mgr[py] Loading python module 'zabbix'
Nov 24 13:19:34 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:19:34.064+0000 7f63da190140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dfqptp restarted
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dfqptp
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: ms_deliver_dispatch: unhandled message 0x563ff52331e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr handle_mgr_map Activating!
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr handle_mgr_map I am now activating
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.dfqptp(active, starting, since 0.349364s)
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: Active manager daemon compute-0.dfqptp restarted
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: Activating manager daemon compute-0.dfqptp
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dfqptp", "id": "compute-0.dfqptp"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e1 all = 1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: balancer
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Starting
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Manager daemon compute-0.dfqptp is now available
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:19:34
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: cephadm
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: crash
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: devicehealth
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: iostat
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: nfs
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: orchestrator
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: pg_autoscaler
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: progress
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [progress INFO root] Loading...
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [progress INFO root] No stored events to load
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [progress INFO root] Loaded [] historic events
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [progress INFO root] Loaded OSDMap, ready.
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] recovery thread starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] starting setup
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: rbd_support
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: restful
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: status
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [restful INFO root] server_addr: :: server_port: 8003
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: telemetry
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] PerfHandler: starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TaskHandler: starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [restful WARNING root] server not running: no certificate configured
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"} v 0) v1
Nov 24 13:19:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] setup complete
Nov 24 13:19:34 np0005533938 ceph-mgr[75218]: mgr load Constructed class from module: volumes
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: Manager daemon compute-0.dfqptp is now available
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: Found migration_current of "None". Setting to last migration.
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/mirror_snapshot_schedule"}]: dispatch
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dfqptp/trash_purge_schedule"}]: dispatch
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:35 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 24 13:19:35 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.dfqptp(active, since 1.42919s)
Nov 24 13:19:35 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 24 13:19:35 np0005533938 kind_haibt[76043]: {
Nov 24 13:19:35 np0005533938 kind_haibt[76043]:    "mgrmap_epoch": 7,
Nov 24 13:19:35 np0005533938 kind_haibt[76043]:    "initialized": true
Nov 24 13:19:35 np0005533938 kind_haibt[76043]: }
Nov 24 13:19:35 np0005533938 systemd[1]: libpod-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope: Deactivated successfully.
Nov 24 13:19:35 np0005533938 podman[76027]: 2025-11-24 18:19:35.520225687 +0000 UTC m=+19.848265491 container died faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:19:35 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e10c2fad4c5182967e646963c4508b811071d37d9f08d88ef29d5ecec3f51427-merged.mount: Deactivated successfully.
Nov 24 13:19:35 np0005533938 podman[76027]: 2025-11-24 18:19:35.629536007 +0000 UTC m=+19.957575821 container remove faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348 (image=quay.io/ceph/ceph:v18, name=kind_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:35 np0005533938 systemd[1]: libpod-conmon-faa2fc1178624312db21ad328e02f4dd04a21a9a60372dde37537f05b8e68348.scope: Deactivated successfully.
Nov 24 13:19:35 np0005533938 podman[76202]: 2025-11-24 18:19:35.738848938 +0000 UTC m=+0.071133410 container create 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:35 np0005533938 systemd[1]: Started libpod-conmon-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope.
Nov 24 13:19:35 np0005533938 podman[76202]: 2025-11-24 18:19:35.711371051 +0000 UTC m=+0.043655603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:35 np0005533938 podman[76202]: 2025-11-24 18:19:35.858152025 +0000 UTC m=+0.190436537 container init 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:19:35 np0005533938 podman[76202]: 2025-11-24 18:19:35.865022181 +0000 UTC m=+0.197306653 container start 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:35 np0005533938 podman[76202]: 2025-11-24 18:19:35.874736091 +0000 UTC m=+0.207020603 container attach 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:36 np0005533938 systemd[1]: libpod-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope: Deactivated successfully.
Nov 24 13:19:36 np0005533938 podman[76202]: 2025-11-24 18:19:36.462350568 +0000 UTC m=+0.794635030 container died 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:36 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d6812407cdc80a2951de00efcafae98d3f2ef7db825d23d36b0ca1f50439027e-merged.mount: Deactivated successfully.
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:36 np0005533938 podman[76202]: 2025-11-24 18:19:36.502402108 +0000 UTC m=+0.834686570 container remove 2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c (image=quay.io/ceph/ceph:v18, name=intelligent_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:36 np0005533938 systemd[1]: libpod-conmon-2ba0f97702153dd7e3ffa694f04f0f3ff3dc1c6eee5650399ffae31e2f1ecd4c.scope: Deactivated successfully.
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.dfqptp(active, since 2s)
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 13:19:36 np0005533938 podman[76255]: 2025-11-24 18:19:36.628249203 +0000 UTC m=+0.104453166 container create f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 24 13:19:36 np0005533938 podman[76255]: 2025-11-24 18:19:36.548658507 +0000 UTC m=+0.024862480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 13:19:36 np0005533938 systemd[1]: Started libpod-conmon-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope.
Nov 24 13:19:36 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: [cephadm INFO cherrypy.error] [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:19:36 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 13:19:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:19:36 np0005533938 podman[76255]: 2025-11-24 18:19:36.823865152 +0000 UTC m=+0.300069115 container init f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:36 np0005533938 podman[76255]: 2025-11-24 18:19:36.828160843 +0000 UTC m=+0.304364796 container start f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:19:36 np0005533938 podman[76255]: 2025-11-24 18:19:36.915446197 +0000 UTC m=+0.391650150 container attach f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_user
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_config
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 24 13:19:37 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 24 13:19:37 np0005533938 nostalgic_goldstine[76294]: ssh user set to ceph-admin. sudo will be used
Nov 24 13:19:37 np0005533938 systemd[1]: libpod-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope: Deactivated successfully.
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923644 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:19:37 np0005533938 podman[76322]: 2025-11-24 18:19:37.468078813 +0000 UTC m=+0.026633186 container died f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Bus STARTING
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Serving on http://192.168.122.100:8765
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Serving on https://192.168.122.100:7150
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Bus STARTED
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: [24/Nov/2025:18:19:36] ENGINE Client ('192.168.122.100', 47586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4388d472766a5fb4b5fccb512a0127dd8f20f7025e523751d03fb262ffdaad85-merged.mount: Deactivated successfully.
Nov 24 13:19:37 np0005533938 podman[76322]: 2025-11-24 18:19:37.68614542 +0000 UTC m=+0.244699723 container remove f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d (image=quay.io/ceph/ceph:v18, name=nostalgic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:19:37 np0005533938 systemd[1]: libpod-conmon-f399ace9ec70427cc31f846cf309ff923c3c7885f6392c81d686b4f1ae6ced2d.scope: Deactivated successfully.
Nov 24 13:19:37 np0005533938 podman[76338]: 2025-11-24 18:19:37.783468602 +0000 UTC m=+0.059990644 container create 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 13:19:37 np0005533938 systemd[1]: Started libpod-conmon-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope.
Nov 24 13:19:37 np0005533938 podman[76338]: 2025-11-24 18:19:37.75658544 +0000 UTC m=+0.033107502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:37 np0005533938 podman[76338]: 2025-11-24 18:19:37.875260861 +0000 UTC m=+0.151782933 container init 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:19:37 np0005533938 podman[76338]: 2025-11-24 18:19:37.884594981 +0000 UTC m=+0.161117023 container start 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:37 np0005533938 podman[76338]: 2025-11-24 18:19:37.899761031 +0000 UTC m=+0.176283293 container attach 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Set ssh private key
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 24 13:19:38 np0005533938 systemd[1]: libpod-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope: Deactivated successfully.
Nov 24 13:19:38 np0005533938 podman[76338]: 2025-11-24 18:19:38.452852151 +0000 UTC m=+0.729374223 container died 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:19:38 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1f5f959a7dc88831e2783326b51c05482402b3e036bd516caab9692fa11ce098-merged.mount: Deactivated successfully.
Nov 24 13:19:38 np0005533938 podman[76338]: 2025-11-24 18:19:38.542259349 +0000 UTC m=+0.818781381 container remove 7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52 (image=quay.io/ceph/ceph:v18, name=pensive_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:19:38 np0005533938 systemd[1]: libpod-conmon-7f22f8b947ebccabb8c867387309fe88fbaa0223efacf2c049719803eebd5d52.scope: Deactivated successfully.
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: Set ssh ssh_user
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: Set ssh ssh_config
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: ssh user set to ceph-admin. sudo will be used
Nov 24 13:19:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:38 np0005533938 podman[76392]: 2025-11-24 18:19:38.657884442 +0000 UTC m=+0.082819170 container create 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:38 np0005533938 systemd[1]: Started libpod-conmon-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope.
Nov 24 13:19:38 np0005533938 podman[76392]: 2025-11-24 18:19:38.618641743 +0000 UTC m=+0.043576551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:38 np0005533938 podman[76392]: 2025-11-24 18:19:38.76359799 +0000 UTC m=+0.188532748 container init 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:19:38 np0005533938 podman[76392]: 2025-11-24 18:19:38.777038335 +0000 UTC m=+0.201973063 container start 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:38 np0005533938 podman[76392]: 2025-11-24 18:19:38.813116373 +0000 UTC m=+0.238051131 container attach 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:19:39 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 24 13:19:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:39 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 24 13:19:39 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 24 13:19:39 np0005533938 systemd[1]: libpod-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope: Deactivated successfully.
Nov 24 13:19:39 np0005533938 podman[76392]: 2025-11-24 18:19:39.380402257 +0000 UTC m=+0.805336995 container died 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-40a7b6f5991e6db7ccfd6dd8450addf11d6acc89611c01c7e08958a506bf8190-merged.mount: Deactivated successfully.
Nov 24 13:19:39 np0005533938 podman[76392]: 2025-11-24 18:19:39.42212212 +0000 UTC m=+0.847056838 container remove 6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4 (image=quay.io/ceph/ceph:v18, name=elegant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:39 np0005533938 systemd[1]: libpod-conmon-6eecba045b8303400ec58207ac05ae5e1404dd37fa824583c1f4df715b5ffde4.scope: Deactivated successfully.
Nov 24 13:19:39 np0005533938 podman[76448]: 2025-11-24 18:19:39.49100562 +0000 UTC m=+0.047043710 container create b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:19:39 np0005533938 systemd[1]: Started libpod-conmon-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope.
Nov 24 13:19:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:39 np0005533938 podman[76448]: 2025-11-24 18:19:39.471496699 +0000 UTC m=+0.027534839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:39 np0005533938 podman[76448]: 2025-11-24 18:19:39.578544801 +0000 UTC m=+0.134582991 container init b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:39 np0005533938 ceph-mon[74927]: Set ssh ssh_identity_key
Nov 24 13:19:39 np0005533938 ceph-mon[74927]: Set ssh private key
Nov 24 13:19:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:39 np0005533938 podman[76448]: 2025-11-24 18:19:39.589438271 +0000 UTC m=+0.145476401 container start b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:39 np0005533938 podman[76448]: 2025-11-24 18:19:39.593053244 +0000 UTC m=+0.149091394 container attach b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:40 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:40 np0005533938 xenodochial_kowalevski[76464]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnwLNSoQ1U4EYT2Q7m2Q5aKUGikv/XawShYlOYI35cpqwzRLkTEvLJjiIInPBov/eo3EFPjZLZRWwJd5tZAhtM95ag00ajKW5pelqURnSlF7z1/HInd4lbjORN0QwA0gjOZZQi9kfU7tP/WdoAfZTzrAwq7PjCh7OBVUi1etEQC/A6BsmjMGLY6PvF6MaZ+Z6LGWzLcXfHv4ThRnT6eHoM3jv/bFkRBViEOTJlYlQ7B7TcWSZXO6bGXFl4HvSqhC+aZcB+owA2Pdf8RyhIyU2teCvqYpnt7LS3AzxAl/tDUVSayFoYbXYsGDMnb/5Jij9dZC1lB0SA9wN0yihcs5xxxLN9+M2njfsSyGQMbFluGmtjAYhFK9JZnB/xJWlMNIjNgFWc3art2/Ze4647fufJ77gn+G0O+duY/FV6mM8yPCIQjt7xbNhH14P7NMRZ9m2xWvh2GwBefsp7IEwmmWbUuAg/U2I3FmeZW4kkFK9I15FWokIqwulUQZ1yUN/x9xs= zuul@controller
Nov 24 13:19:40 np0005533938 systemd[1]: libpod-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope: Deactivated successfully.
Nov 24 13:19:40 np0005533938 podman[76448]: 2025-11-24 18:19:40.113565496 +0000 UTC m=+0.669603606 container died b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:19:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f67277e9e3a1e3eae095f1d48617c881a7f3291f07c84c85b1cb54032e68f812-merged.mount: Deactivated successfully.
Nov 24 13:19:40 np0005533938 podman[76448]: 2025-11-24 18:19:40.160730498 +0000 UTC m=+0.716768588 container remove b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce (image=quay.io/ceph/ceph:v18, name=xenodochial_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:19:40 np0005533938 systemd[1]: libpod-conmon-b105046f208ef1e133dad7ea43dacddd577f07c8ce07351ecf4f19c1792d9cce.scope: Deactivated successfully.
Nov 24 13:19:40 np0005533938 podman[76502]: 2025-11-24 18:19:40.225644077 +0000 UTC m=+0.042701878 container create c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:40 np0005533938 systemd[1]: Started libpod-conmon-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope.
Nov 24 13:19:40 np0005533938 podman[76502]: 2025-11-24 18:19:40.207198413 +0000 UTC m=+0.024256254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:40 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:40 np0005533938 podman[76502]: 2025-11-24 18:19:40.351637377 +0000 UTC m=+0.168695248 container init c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 24 13:19:40 np0005533938 podman[76502]: 2025-11-24 18:19:40.357006705 +0000 UTC m=+0.174064536 container start c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:40 np0005533938 podman[76502]: 2025-11-24 18:19:40.384673476 +0000 UTC m=+0.201731277 container attach c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:19:40 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:40 np0005533938 ceph-mon[74927]: Set ssh ssh_identity_pub
Nov 24 13:19:40 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:41 np0005533938 systemd[1]: Created slice User Slice of UID 42477.
Nov 24 13:19:41 np0005533938 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 24 13:19:41 np0005533938 systemd-logind[822]: New session 20 of user ceph-admin.
Nov 24 13:19:41 np0005533938 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 24 13:19:41 np0005533938 systemd[1]: Starting User Manager for UID 42477...
Nov 24 13:19:41 np0005533938 systemd-logind[822]: New session 22 of user ceph-admin.
Nov 24 13:19:41 np0005533938 systemd[76548]: Queued start job for default target Main User Target.
Nov 24 13:19:41 np0005533938 systemd[76548]: Created slice User Application Slice.
Nov 24 13:19:41 np0005533938 systemd[76548]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 24 13:19:41 np0005533938 systemd[76548]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 13:19:41 np0005533938 systemd[76548]: Reached target Paths.
Nov 24 13:19:41 np0005533938 systemd[76548]: Reached target Timers.
Nov 24 13:19:41 np0005533938 systemd[76548]: Starting D-Bus User Message Bus Socket...
Nov 24 13:19:41 np0005533938 systemd[76548]: Starting Create User's Volatile Files and Directories...
Nov 24 13:19:41 np0005533938 systemd[76548]: Finished Create User's Volatile Files and Directories.
Nov 24 13:19:41 np0005533938 systemd[76548]: Listening on D-Bus User Message Bus Socket.
Nov 24 13:19:41 np0005533938 systemd[76548]: Reached target Sockets.
Nov 24 13:19:41 np0005533938 systemd[76548]: Reached target Basic System.
Nov 24 13:19:41 np0005533938 systemd[76548]: Reached target Main User Target.
Nov 24 13:19:41 np0005533938 systemd[76548]: Startup finished in 189ms.
Nov 24 13:19:41 np0005533938 systemd[1]: Started User Manager for UID 42477.
Nov 24 13:19:41 np0005533938 systemd[1]: Started Session 20 of User ceph-admin.
Nov 24 13:19:41 np0005533938 systemd[1]: Started Session 22 of User ceph-admin.
Nov 24 13:19:41 np0005533938 systemd-logind[822]: New session 23 of user ceph-admin.
Nov 24 13:19:41 np0005533938 systemd[1]: Started Session 23 of User ceph-admin.
Nov 24 13:19:42 np0005533938 systemd-logind[822]: New session 24 of user ceph-admin.
Nov 24 13:19:42 np0005533938 systemd[1]: Started Session 24 of User ceph-admin.
Nov 24 13:19:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053071 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:19:42 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:42 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 24 13:19:42 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 24 13:19:42 np0005533938 systemd-logind[822]: New session 25 of user ceph-admin.
Nov 24 13:19:42 np0005533938 systemd[1]: Started Session 25 of User ceph-admin.
Nov 24 13:19:43 np0005533938 ceph-mon[74927]: Deploying cephadm binary to compute-0
Nov 24 13:19:43 np0005533938 systemd-logind[822]: New session 26 of user ceph-admin.
Nov 24 13:19:43 np0005533938 systemd[1]: Started Session 26 of User ceph-admin.
Nov 24 13:19:43 np0005533938 systemd-logind[822]: New session 27 of user ceph-admin.
Nov 24 13:19:43 np0005533938 systemd[1]: Started Session 27 of User ceph-admin.
Nov 24 13:19:44 np0005533938 systemd-logind[822]: New session 28 of user ceph-admin.
Nov 24 13:19:44 np0005533938 systemd[1]: Started Session 28 of User ceph-admin.
Nov 24 13:19:44 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:44 np0005533938 systemd-logind[822]: New session 29 of user ceph-admin.
Nov 24 13:19:44 np0005533938 systemd[1]: Started Session 29 of User ceph-admin.
Nov 24 13:19:45 np0005533938 systemd-logind[822]: New session 30 of user ceph-admin.
Nov 24 13:19:45 np0005533938 systemd[1]: Started Session 30 of User ceph-admin.
Nov 24 13:19:45 np0005533938 systemd-logind[822]: New session 31 of user ceph-admin.
Nov 24 13:19:45 np0005533938 systemd[1]: Started Session 31 of User ceph-admin.
Nov 24 13:19:46 np0005533938 systemd-logind[822]: New session 32 of user ceph-admin.
Nov 24 13:19:46 np0005533938 systemd[1]: Started Session 32 of User ceph-admin.
Nov 24 13:19:46 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:19:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:46 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Added host compute-0
Nov 24 13:19:46 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 13:19:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:19:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:19:46 np0005533938 flamboyant_ramanujan[76518]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 13:19:46 np0005533938 systemd[1]: libpod-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope: Deactivated successfully.
Nov 24 13:19:46 np0005533938 podman[77170]: 2025-11-24 18:19:46.840117825 +0000 UTC m=+0.038950583 container died c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-83cd63f90b30827b6c65e8918765df44dedba81491d2f696c250ede48d4d1a6b-merged.mount: Deactivated successfully.
Nov 24 13:19:46 np0005533938 podman[77170]: 2025-11-24 18:19:46.884306351 +0000 UTC m=+0.083139059 container remove c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998 (image=quay.io/ceph/ceph:v18, name=flamboyant_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 13:19:46 np0005533938 systemd[1]: libpod-conmon-c207e83c64ad15427860f70a7517741ae98b14fa7c42562593c638e7dccce998.scope: Deactivated successfully.
Nov 24 13:19:46 np0005533938 podman[77217]: 2025-11-24 18:19:46.958677403 +0000 UTC m=+0.044060164 container create 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:46 np0005533938 systemd[1]: Started libpod-conmon-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope.
Nov 24 13:19:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:46.940072464 +0000 UTC m=+0.025455245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:47.044283244 +0000 UTC m=+0.129666005 container init 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:47.055099172 +0000 UTC m=+0.140481933 container start 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:47.058207411 +0000 UTC m=+0.143590172 container attach 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.330175343 +0000 UTC m=+0.051921835 container create e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:19:47 np0005533938 systemd[1]: Started libpod-conmon-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope.
Nov 24 13:19:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.304539634 +0000 UTC m=+0.026286126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.406378562 +0000 UTC m=+0.128125074 container init e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.413599128 +0000 UTC m=+0.135345620 container start e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.416950814 +0000 UTC m=+0.138697316 container attach e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:19:47 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:47 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 24 13:19:47 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:47 np0005533938 blissful_saha[77265]: Scheduled mon update...
Nov 24 13:19:47 np0005533938 systemd[1]: libpod-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope: Deactivated successfully.
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:47.658419992 +0000 UTC m=+0.743802753 container died 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:19:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8ce5f3305c03bb9954ae78d5b9d467945b633a6b5b2a2fc8f94aa15713d1a737-merged.mount: Deactivated successfully.
Nov 24 13:19:47 np0005533938 podman[77217]: 2025-11-24 18:19:47.699335844 +0000 UTC m=+0.784718605 container remove 57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613 (image=quay.io/ceph/ceph:v18, name=blissful_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:19:47 np0005533938 awesome_lovelace[77335]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 24 13:19:47 np0005533938 systemd[1]: libpod-conmon-57fd4a3861670ad14ac25eaf1dfde2bb94c738e14b79c09019ca2fdadd121613.scope: Deactivated successfully.
Nov 24 13:19:47 np0005533938 systemd[1]: libpod-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope: Deactivated successfully.
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.721595576 +0000 UTC m=+0.443342068 container died e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e01b3fd59ae997e6b6ee6f237d44f20dda4cb1424c30a4c40e090240d3355a68-merged.mount: Deactivated successfully.
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: Added host compute-0
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:47 np0005533938 podman[77318]: 2025-11-24 18:19:47.770588996 +0000 UTC m=+0.492335488 container remove e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5 (image=quay.io/ceph/ceph:v18, name=awesome_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:19:47 np0005533938 systemd[1]: libpod-conmon-e70fde046f0eb1e9c64b91c1e2366a43d14ef14b207ba2e722165e0dc74675f5.scope: Deactivated successfully.
Nov 24 13:19:47 np0005533938 podman[77373]: 2025-11-24 18:19:47.793966147 +0000 UTC m=+0.068952314 container create ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 24 13:19:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:47 np0005533938 systemd[1]: Started libpod-conmon-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope.
Nov 24 13:19:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:47 np0005533938 podman[77373]: 2025-11-24 18:19:47.772377412 +0000 UTC m=+0.047363609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:47 np0005533938 podman[77373]: 2025-11-24 18:19:47.884406852 +0000 UTC m=+0.159393049 container init ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:19:47 np0005533938 podman[77373]: 2025-11-24 18:19:47.890034627 +0000 UTC m=+0.165020804 container start ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:19:47 np0005533938 podman[77373]: 2025-11-24 18:19:47.894908212 +0000 UTC m=+0.169894389 container attach ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:48 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:48 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 24 13:19:48 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:48 np0005533938 awesome_cori[77407]: Scheduled mgr update...
Nov 24 13:19:48 np0005533938 systemd[1]: libpod-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope: Deactivated successfully.
Nov 24 13:19:48 np0005533938 podman[77373]: 2025-11-24 18:19:48.444423919 +0000 UTC m=+0.719410096 container died ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:48 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d9249677785a959dea2784ff02723449ab10a580f8f73bfb413e0048c0cc61a2-merged.mount: Deactivated successfully.
Nov 24 13:19:48 np0005533938 podman[77373]: 2025-11-24 18:19:48.479788238 +0000 UTC m=+0.754774415 container remove ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe (image=quay.io/ceph/ceph:v18, name=awesome_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:19:48 np0005533938 systemd[1]: libpod-conmon-ce247e1536e049c78708010198f4b3e3e7726b574f2ff49b2f8ec4529c4d88fe.scope: Deactivated successfully.
Nov 24 13:19:48 np0005533938 podman[77657]: 2025-11-24 18:19:48.533323384 +0000 UTC m=+0.035377170 container create 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:19:48 np0005533938 systemd[1]: Started libpod-conmon-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope.
Nov 24 13:19:48 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:48 np0005533938 podman[77657]: 2025-11-24 18:19:48.597226387 +0000 UTC m=+0.099280253 container init 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:48 np0005533938 podman[77657]: 2025-11-24 18:19:48.60394697 +0000 UTC m=+0.106000766 container start 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:48 np0005533938 podman[77657]: 2025-11-24 18:19:48.607639165 +0000 UTC m=+0.109692991 container attach 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 24 13:19:48 np0005533938 podman[77657]: 2025-11-24 18:19:48.5179964 +0000 UTC m=+0.020050206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: Saving service mon spec with placement count:5
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:48 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:48 np0005533938 podman[77749]: 2025-11-24 18:19:48.967021504 +0000 UTC m=+0.048814426 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:49 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:49 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service crash spec with placement *
Nov 24 13:19:49 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:49 np0005533938 awesome_cohen[77674]: Scheduled crash update...
Nov 24 13:19:49 np0005533938 systemd[1]: libpod-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope: Deactivated successfully.
Nov 24 13:19:49 np0005533938 podman[77657]: 2025-11-24 18:19:49.175044802 +0000 UTC m=+0.677098608 container died 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f68b85bafb1193e052ead1bf48e693045ec831a1d095b51466004768e9551a5a-merged.mount: Deactivated successfully.
Nov 24 13:19:49 np0005533938 podman[77657]: 2025-11-24 18:19:49.215007839 +0000 UTC m=+0.717061645 container remove 2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec (image=quay.io/ceph/ceph:v18, name=awesome_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:49 np0005533938 systemd[1]: libpod-conmon-2b100e0067e4f581f65b833cc7f94a833eaf1c7812e04226f60c68fc0a3c67ec.scope: Deactivated successfully.
Nov 24 13:19:49 np0005533938 podman[77749]: 2025-11-24 18:19:49.250987854 +0000 UTC m=+0.332780756 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.28739125 +0000 UTC m=+0.049164335 container create 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:49 np0005533938 systemd[1]: Started libpod-conmon-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope.
Nov 24 13:19:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.363158248 +0000 UTC m=+0.124931343 container init 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.270464385 +0000 UTC m=+0.032237490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.370200399 +0000 UTC m=+0.131973484 container start 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.373660078 +0000 UTC m=+0.135433163 container attach 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: Saving service mgr spec with placement count:2
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 24 13:19:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3735356326' entity='client.admin' 
Nov 24 13:19:49 np0005533938 systemd[1]: libpod-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope: Deactivated successfully.
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.919322567 +0000 UTC m=+0.681095692 container died 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:19:49 np0005533938 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77989 (sysctl)
Nov 24 13:19:49 np0005533938 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 24 13:19:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1921e0a5c7c944d64d9f44713cbb8300270185e652e54d6e2895b5395bf2ce75-merged.mount: Deactivated successfully.
Nov 24 13:19:49 np0005533938 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 24 13:19:49 np0005533938 podman[77803]: 2025-11-24 18:19:49.97703256 +0000 UTC m=+0.738805645 container remove 6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162 (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:49 np0005533938 systemd[1]: libpod-conmon-6b5d59727d8936a695934a006bc24d79ec3654acce3f91be2534cf353e441162.scope: Deactivated successfully.
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.058336281 +0000 UTC m=+0.058993718 container create ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:50 np0005533938 systemd[1]: Started libpod-conmon-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope.
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.029108629 +0000 UTC m=+0.029766126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:50 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.160289132 +0000 UTC m=+0.160946549 container init ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.168534154 +0000 UTC m=+0.169191591 container start ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.172341691 +0000 UTC m=+0.172999088 container attach ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:50 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:50 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:50 np0005533938 systemd[1]: libpod-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope: Deactivated successfully.
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.701374882 +0000 UTC m=+0.702032289 container died ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:50 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6a5a7721258c7137d615515bbc15a9b45d954bc757549d82a058125bc7c01429-merged.mount: Deactivated successfully.
Nov 24 13:19:50 np0005533938 podman[78003]: 2025-11-24 18:19:50.759592399 +0000 UTC m=+0.760249836 container remove ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:19:50 np0005533938 systemd[1]: libpod-conmon-ff62592166534391aee4b1c1b1f2c190e789bd13fde518b5e26e4e4827a8a122.scope: Deactivated successfully.
Nov 24 13:19:50 np0005533938 podman[78177]: 2025-11-24 18:19:50.850572198 +0000 UTC m=+0.060619199 container create 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: Saving service crash spec with placement *
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3735356326' entity='client.admin' 
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:50 np0005533938 systemd[1]: Started libpod-conmon-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope.
Nov 24 13:19:50 np0005533938 podman[78177]: 2025-11-24 18:19:50.822180088 +0000 UTC m=+0.032227139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:50 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:50 np0005533938 podman[78177]: 2025-11-24 18:19:50.96035096 +0000 UTC m=+0.170398031 container init 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:19:50 np0005533938 podman[78177]: 2025-11-24 18:19:50.9684895 +0000 UTC m=+0.178536511 container start 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:50 np0005533938 podman[78177]: 2025-11-24 18:19:50.973030656 +0000 UTC m=+0.183077657 container attach 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:19:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:19:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:51 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:19:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:19:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:51 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Added label _admin to host compute-0
Nov 24 13:19:51 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 24 13:19:51 np0005533938 epic_swanson[78206]: Added label _admin to host compute-0
Nov 24 13:19:51 np0005533938 podman[78177]: 2025-11-24 18:19:51.537772474 +0000 UTC m=+0.747819475 container died 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:19:51 np0005533938 systemd[1]: libpod-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope: Deactivated successfully.
Nov 24 13:19:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0a49c494f8967391950032d2199b06d5832319b9e16ae1f6f7de9e4b29be3f38-merged.mount: Deactivated successfully.
Nov 24 13:19:51 np0005533938 podman[78177]: 2025-11-24 18:19:51.611651884 +0000 UTC m=+0.821698845 container remove 6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3 (image=quay.io/ceph/ceph:v18, name=epic_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:19:51 np0005533938 systemd[1]: libpod-conmon-6b913e7083679fb3c38c2845c7c6f3f52908c74722c534c83738a110d4eb3bf3.scope: Deactivated successfully.
Nov 24 13:19:51 np0005533938 podman[78377]: 2025-11-24 18:19:51.681445098 +0000 UTC m=+0.046753303 container create f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:51 np0005533938 systemd[1]: Started libpod-conmon-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope.
Nov 24 13:19:51 np0005533938 podman[78377]: 2025-11-24 18:19:51.657445801 +0000 UTC m=+0.022754026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.759626748 +0000 UTC m=+0.045589953 container create f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:51 np0005533938 podman[78377]: 2025-11-24 18:19:51.774450619 +0000 UTC m=+0.139758864 container init f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:51 np0005533938 podman[78377]: 2025-11-24 18:19:51.783587034 +0000 UTC m=+0.148895239 container start f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:19:51 np0005533938 podman[78377]: 2025-11-24 18:19:51.788569112 +0000 UTC m=+0.153877357 container attach f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:19:51 np0005533938 systemd[1]: Started libpod-conmon-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope.
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.734867331 +0000 UTC m=+0.020830526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:19:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.856800586 +0000 UTC m=+0.142763791 container init f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.862049601 +0000 UTC m=+0.148012776 container start f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.865988942 +0000 UTC m=+0.151952167 container attach f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:19:51 np0005533938 admiring_curie[78428]: 167 167
Nov 24 13:19:51 np0005533938 systemd[1]: libpod-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope: Deactivated successfully.
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.868006834 +0000 UTC m=+0.153970009 container died f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:19:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-01cb7676017884178bff14a19dbfbc3c1124e25c872722f06e49fe4c3ca9491a-merged.mount: Deactivated successfully.
Nov 24 13:19:51 np0005533938 podman[78405]: 2025-11-24 18:19:51.913533555 +0000 UTC m=+0.199496750 container remove f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:51 np0005533938 systemd[1]: libpod-conmon-f1bcbd1729cdba1323091035af3bebcce8f8e51322c7cb7b64bee586f59d2e5d.scope: Deactivated successfully.
Nov 24 13:19:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 24 13:19:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1867413685' entity='client.admin' 
Nov 24 13:19:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:19:52 np0005533938 ceph-mgr[75218]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 24 13:19:52 np0005533938 systemd[1]: libpod-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope: Deactivated successfully.
Nov 24 13:19:52 np0005533938 podman[78377]: 2025-11-24 18:19:52.477006681 +0000 UTC m=+0.842314886 container died f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 13:19:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ee484dfdd6bbe71bb539a6240aced83d58e22072997bbda19c11884bd14baec1-merged.mount: Deactivated successfully.
Nov 24 13:19:52 np0005533938 podman[78377]: 2025-11-24 18:19:52.527656343 +0000 UTC m=+0.892964558 container remove f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca (image=quay.io/ceph/ceph:v18, name=youthful_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:19:52 np0005533938 systemd[1]: libpod-conmon-f22006b2f1ed6e452bc338277cb6cb8c7f88b4bd0643164d7d922d3a22e105ca.scope: Deactivated successfully.
Nov 24 13:19:52 np0005533938 podman[78477]: 2025-11-24 18:19:52.623408745 +0000 UTC m=+0.071121390 container create 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:52 np0005533938 systemd[1]: Started libpod-conmon-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope.
Nov 24 13:19:52 np0005533938 podman[78477]: 2025-11-24 18:19:52.594340278 +0000 UTC m=+0.042052983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:52 np0005533938 podman[78477]: 2025-11-24 18:19:52.714796904 +0000 UTC m=+0.162509529 container init 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:52 np0005533938 podman[78477]: 2025-11-24 18:19:52.724200946 +0000 UTC m=+0.171913551 container start 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:52 np0005533938 podman[78477]: 2025-11-24 18:19:52.727973273 +0000 UTC m=+0.175685878 container attach 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:53 np0005533938 ceph-mon[74927]: Added label _admin to host compute-0
Nov 24 13:19:53 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1867413685' entity='client.admin' 
Nov 24 13:19:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 24 13:19:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2903664547' entity='client.admin' 
Nov 24 13:19:53 np0005533938 great_kilby[78493]: set mgr/dashboard/cluster/status
Nov 24 13:19:53 np0005533938 systemd[1]: libpod-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope: Deactivated successfully.
Nov 24 13:19:53 np0005533938 podman[78477]: 2025-11-24 18:19:53.388915335 +0000 UTC m=+0.836627940 container died 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:53 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ae09d0eee0691ec9aa43b58051d76c60e793ac89c406d1031099fdd7c4b3c418-merged.mount: Deactivated successfully.
Nov 24 13:19:53 np0005533938 podman[78477]: 2025-11-24 18:19:53.425642819 +0000 UTC m=+0.873355424 container remove 0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2 (image=quay.io/ceph/ceph:v18, name=great_kilby, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:53 np0005533938 systemd[1]: libpod-conmon-0e2531a51d1b08ef3228c7e94afcba267208da57b990da46e842cd7dca3f90e2.scope: Deactivated successfully.
Nov 24 13:19:53 np0005533938 podman[78540]: 2025-11-24 18:19:53.55636766 +0000 UTC m=+0.035214896 container create 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:19:53 np0005533938 systemd[1]: Started libpod-conmon-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope.
Nov 24 13:19:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:53 np0005533938 podman[78540]: 2025-11-24 18:19:53.540235766 +0000 UTC m=+0.019083052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:19:53 np0005533938 podman[78540]: 2025-11-24 18:19:53.642588317 +0000 UTC m=+0.121435543 container init 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:19:53 np0005533938 podman[78540]: 2025-11-24 18:19:53.648366435 +0000 UTC m=+0.127213661 container start 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:53 np0005533938 podman[78540]: 2025-11-24 18:19:53.651098596 +0000 UTC m=+0.129945812 container attach 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:19:53 np0005533938 python3[78586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:19:53 np0005533938 podman[78587]: 2025-11-24 18:19:53.944491679 +0000 UTC m=+0.037168127 container create a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:19:53 np0005533938 systemd[1]: Started libpod-conmon-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope.
Nov 24 13:19:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:54.019055135 +0000 UTC m=+0.111731583 container init a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:53.927010639 +0000 UTC m=+0.019687087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:54.028815436 +0000 UTC m=+0.121491884 container start a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:54.032724117 +0000 UTC m=+0.125400585 container attach a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:54 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2903664547' entity='client.admin' 
Nov 24 13:19:54 np0005533938 ceph-mgr[75218]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 24 13:19:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:19:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 13:19:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 24 13:19:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2240054925' entity='client.admin' 
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:54.576188669 +0000 UTC m=+0.668865117 container died a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:19:54 np0005533938 systemd[1]: libpod-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope: Deactivated successfully.
Nov 24 13:19:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-424b445c652075fc2e8f4f194858af6784d94dc29906fc5d36f9f4533acb11f2-merged.mount: Deactivated successfully.
Nov 24 13:19:54 np0005533938 podman[78587]: 2025-11-24 18:19:54.61940755 +0000 UTC m=+0.712083988 container remove a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9 (image=quay.io/ceph/ceph:v18, name=festive_meitner, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:19:54 np0005533938 systemd[1]: libpod-conmon-a08f139dfee75b0aedd6a9d48bafa7d89e032d4aa6101c09934607b15e1ac5b9.scope: Deactivated successfully.
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]: [
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:    {
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "available": false,
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "ceph_device": false,
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "lsm_data": {},
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "lvs": [],
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "path": "/dev/sr0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "rejected_reasons": [
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "Insufficient space (<5GB)",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "Has a FileSystem"
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        ],
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        "sys_api": {
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "actuators": null,
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "device_nodes": "sr0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "devname": "sr0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "human_readable_size": "482.00 KB",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "id_bus": "ata",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "model": "QEMU DVD-ROM",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "nr_requests": "2",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "parent": "/dev/sr0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "partitions": {},
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "path": "/dev/sr0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "removable": "1",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "rev": "2.5+",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "ro": "0",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "rotational": "1",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "sas_address": "",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "sas_device_handle": "",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "scheduler_mode": "mq-deadline",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "sectors": 0,
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "sectorsize": "2048",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "size": 493568.0,
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "support_discard": "2048",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "type": "disk",
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:            "vendor": "QEMU"
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:        }
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]:    }
Nov 24 13:19:54 np0005533938 elegant_chatelet[78556]: ]
Nov 24 13:19:54 np0005533938 podman[78540]: 2025-11-24 18:19:54.987149614 +0000 UTC m=+1.465996840 container died 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:54 np0005533938 systemd[1]: libpod-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Deactivated successfully.
Nov 24 13:19:54 np0005533938 systemd[1]: libpod-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Consumed 1.347s CPU time.
Nov 24 13:19:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a7549dae49b8e0587ea340b297ba987b00943c40a7b201c2aa5730103053632d-merged.mount: Deactivated successfully.
Nov 24 13:19:55 np0005533938 podman[78540]: 2025-11-24 18:19:55.043913684 +0000 UTC m=+1.522760910 container remove 83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:55 np0005533938 systemd[1]: libpod-conmon-83223adfc10d943e67f60e282e6d29afc3442af328186165844bf5b0142580d0.scope: Deactivated successfully.
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:19:55 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 24 13:19:55 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2240054925' entity='client.admin' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:19:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:19:55 np0005533938 ansible-async_wrapper.py[80446]: Invoked with j620571266268 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008394.9836938-36683-29946511160405/AnsiballZ_command.py _
Nov 24 13:19:55 np0005533938 ansible-async_wrapper.py[80498]: Starting module and watcher
Nov 24 13:19:55 np0005533938 ansible-async_wrapper.py[80498]: Start watching 80499 (30)
Nov 24 13:19:55 np0005533938 ansible-async_wrapper.py[80499]: Start module (80499)
Nov 24 13:19:55 np0005533938 ansible-async_wrapper.py[80446]: Return async_wrapper task started.
Nov 24 13:19:55 np0005533938 python3[80501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:19:55 np0005533938 podman[80575]: 2025-11-24 18:19:55.993030553 +0000 UTC m=+0.054630015 container create 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:56 np0005533938 systemd[1]: Started libpod-conmon-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope.
Nov 24 13:19:56 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:55.971242503 +0000 UTC m=+0.032841975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:56.082789781 +0000 UTC m=+0.144389343 container init 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:56.092596783 +0000 UTC m=+0.154196275 container start 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:56.096259207 +0000 UTC m=+0.157858679 container attach 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:19:56 np0005533938 ceph-mon[74927]: Updating compute-0:/etc/ceph/ceph.conf
Nov 24 13:19:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:19:56 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 13:19:56 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 13:19:56 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:19:56 np0005533938 recursing_dhawan[80616]: 
Nov 24 13:19:56 np0005533938 recursing_dhawan[80616]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 13:19:56 np0005533938 systemd[1]: libpod-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope: Deactivated successfully.
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:56.704368911 +0000 UTC m=+0.765968423 container died 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-64f114bfae53f61884629050bc58776c095c866d2ef2c531d72dd12e643fbd86-merged.mount: Deactivated successfully.
Nov 24 13:19:56 np0005533938 podman[80575]: 2025-11-24 18:19:56.754846629 +0000 UTC m=+0.816446101 container remove 96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2 (image=quay.io/ceph/ceph:v18, name=recursing_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:19:56 np0005533938 systemd[1]: libpod-conmon-96017e097dea415a900c63b9111d9dfd0bf659062706ce9e00b4fda9df3c45f2.scope: Deactivated successfully.
Nov 24 13:19:56 np0005533938 ansible-async_wrapper.py[80499]: Module complete (80499)
Nov 24 13:19:57 np0005533938 python3[81011]: ansible-ansible.legacy.async_status Invoked with jid=j620571266268.80446 mode=status _async_dir=/root/.ansible_async
Nov 24 13:19:57 np0005533938 ceph-mon[74927]: Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.conf
Nov 24 13:19:57 np0005533938 python3[81147]: ansible-ansible.legacy.async_status Invoked with jid=j620571266268.80446 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 13:19:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:19:57 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 13:19:57 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 13:19:57 np0005533938 python3[81345]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 24 13:19:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:19:59 np0005533938 ceph-mon[74927]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 24 13:19:59 np0005533938 python3[81523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.20312492 +0000 UTC m=+0.039858605 container create 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:19:59 np0005533938 systemd[1]: Started libpod-conmon-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope.
Nov 24 13:19:59 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:19:59 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:59 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:59 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.186948035 +0000 UTC m=+0.023681750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.368959224 +0000 UTC m=+0.205692959 container init 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.374292031 +0000 UTC m=+0.211025726 container start 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.378678414 +0000 UTC m=+0.215412109 container attach 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:19:59 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 13:19:59 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 13:19:59 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:19:59 np0005533938 eloquent_hofstadter[81650]: 
Nov 24 13:19:59 np0005533938 eloquent_hofstadter[81650]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 13:19:59 np0005533938 systemd[1]: libpod-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope: Deactivated successfully.
Nov 24 13:19:59 np0005533938 podman[81600]: 2025-11-24 18:19:59.912793645 +0000 UTC m=+0.749527330 container died 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:20:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-404b13da7f582898f1551392d420569ac4d41dd47626b522da05a27a1baf3a0d-merged.mount: Deactivated successfully.
Nov 24 13:20:00 np0005533938 podman[81600]: 2025-11-24 18:20:00.084406877 +0000 UTC m=+0.921140582 container remove 86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa (image=quay.io/ceph/ceph:v18, name=eloquent_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:00 np0005533938 systemd[1]: libpod-conmon-86e29a39f4d8f3daa52ecc9db10ab0c5991b7de3ba8717962b07b5c6649fc2fa.scope: Deactivated successfully.
Nov 24 13:20:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:00 np0005533938 python3[82196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:00 np0005533938 podman[82244]: 2025-11-24 18:20:00.581102267 +0000 UTC m=+0.055092548 container create 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:20:00 np0005533938 systemd[1]: Started libpod-conmon-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope.
Nov 24 13:20:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:00 np0005533938 podman[82244]: 2025-11-24 18:20:00.552690996 +0000 UTC m=+0.026681287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:00 np0005533938 podman[82244]: 2025-11-24 18:20:00.673052731 +0000 UTC m=+0.147043002 container init 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:00 np0005533938 podman[82244]: 2025-11-24 18:20:00.678906691 +0000 UTC m=+0.152896952 container start 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:00 np0005533938 podman[82244]: 2025-11-24 18:20:00.702148029 +0000 UTC m=+0.176138300 container attach 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:20:00 np0005533938 ansible-async_wrapper.py[80498]: Done in kid B.
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:00 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1))
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:00 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 24 13:20:00 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: Updating compute-0:/var/lib/ceph/e5ee928f-099b-569b-93c9-ecf025cbb50d/config/ceph.client.admin.keyring
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 24 13:20:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3033777510' entity='client.admin' 
Nov 24 13:20:01 np0005533938 systemd[1]: libpod-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82244]: 2025-11-24 18:20:01.273305513 +0000 UTC m=+0.747295774 container died 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.300986544 +0000 UTC m=+0.055333753 container create 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:20:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f8154eaf8899a373e75c21ccd2cf89316033ae49ed89b25271ba54bb81b6aacc-merged.mount: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82244]: 2025-11-24 18:20:01.357937328 +0000 UTC m=+0.831927589 container remove 3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889 (image=quay.io/ceph/ceph:v18, name=interesting_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:20:01 np0005533938 systemd[1]: libpod-conmon-3e7e41fe54c5b1c7599cad53f7e8788c97b5ed1b66595f7ecc3e2850dce8f889.scope: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.266276022 +0000 UTC m=+0.020623271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:01 np0005533938 systemd[1]: Started libpod-conmon-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope.
Nov 24 13:20:01 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.527323813 +0000 UTC m=+0.281671062 container init 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.533338208 +0000 UTC m=+0.287685417 container start 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:20:01 np0005533938 jovial_chaplygin[82533]: 167 167
Nov 24 13:20:01 np0005533938 systemd[1]: libpod-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.538718706 +0000 UTC m=+0.293065915 container attach 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.540206714 +0000 UTC m=+0.294553923 container died 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:20:01 np0005533938 python3[82560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-902ef1fb5c1db6898db6e8bff6cf304bd6b07d1ba1e5354c20b7736401be0395-merged.mount: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82506]: 2025-11-24 18:20:01.689309008 +0000 UTC m=+0.443656227 container remove 16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:01 np0005533938 systemd[1]: libpod-conmon-16b247e5bd012cb1dca1288c039f098d5a525c4f155724ff499bc822fb5e10e7.scope: Deactivated successfully.
Nov 24 13:20:01 np0005533938 podman[82576]: 2025-11-24 18:20:01.775072222 +0000 UTC m=+0.116489155 container create 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:20:01 np0005533938 podman[82576]: 2025-11-24 18:20:01.698132644 +0000 UTC m=+0.039549617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:01 np0005533938 systemd[1]: Started libpod-conmon-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope.
Nov 24 13:20:01 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:01 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:01 np0005533938 podman[82576]: 2025-11-24 18:20:01.902752205 +0000 UTC m=+0.244169188 container init 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:20:01 np0005533938 podman[82576]: 2025-11-24 18:20:01.912019243 +0000 UTC m=+0.253436176 container start 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:20:01 np0005533938 podman[82576]: 2025-11-24 18:20:01.915856972 +0000 UTC m=+0.257273945 container attach 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:20:01 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:01 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:02 np0005533938 ceph-mon[74927]: Deploying daemon crash.compute-0 on compute-0
Nov 24 13:20:02 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3033777510' entity='client.admin' 
Nov 24 13:20:02 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:02 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:02 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:02 np0005533938 systemd[1]: Starting Ceph crash.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 24 13:20:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/605511265' entity='client.admin' 
Nov 24 13:20:02 np0005533938 systemd[1]: libpod-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope: Deactivated successfully.
Nov 24 13:20:02 np0005533938 podman[82576]: 2025-11-24 18:20:02.524799066 +0000 UTC m=+0.866216079 container died 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:20:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2d6bdaddf4e7eef32e1d93aea15ffce6bd3df2e20cd2c5f7f545f7ac0fa23a85-merged.mount: Deactivated successfully.
Nov 24 13:20:02 np0005533938 podman[82576]: 2025-11-24 18:20:02.584894221 +0000 UTC m=+0.926311154 container remove 0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35 (image=quay.io/ceph/ceph:v18, name=peaceful_swanson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:20:02 np0005533938 systemd[1]: libpod-conmon-0b65e2ff6f159bec9c3f0f2ced076d1d8a8a14a8bc7eb4db95e87622d984ff35.scope: Deactivated successfully.
Nov 24 13:20:02 np0005533938 podman[82759]: 2025-11-24 18:20:02.692518908 +0000 UTC m=+0.033434581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:02 np0005533938 podman[82759]: 2025-11-24 18:20:02.778147239 +0000 UTC m=+0.119062942 container create cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:20:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abf9c670d57f1ac4dcd46a55d2b473e74cc2a2c9703cd8e10016e57fe3663fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:02 np0005533938 podman[82759]: 2025-11-24 18:20:02.900539156 +0000 UTC m=+0.241454859 container init cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:20:02 np0005533938 podman[82759]: 2025-11-24 18:20:02.910218565 +0000 UTC m=+0.251134268 container start cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:20:02 np0005533938 python3[82801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:03 np0005533938 bash[82759]: cd3250af4db771b6a0133939d88755e021a29e7d1ca9d8eb073c2b3ab97e18ec
Nov 24 13:20:03 np0005533938 systemd[1]: Started Ceph crash.compute-0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:03 np0005533938 podman[82805]: 2025-11-24 18:20:03.087283777 +0000 UTC m=+0.072581807 container create b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1))
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event fdda5bf1-6e1e-476a-b44b-c7c92d3cdd82 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev ab68b6b2-560b-49f3-9a0c-9ea13e98aaea does not exist
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2))
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 13:20:03 np0005533938 systemd[1]: Started libpod-conmon-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope.
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 13:20:03 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 13:20:03 np0005533938 podman[82805]: 2025-11-24 18:20:03.060345724 +0000 UTC m=+0.045643774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:03 np0005533938 podman[82805]: 2025-11-24 18:20:03.196370261 +0000 UTC m=+0.181668331 container init b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:03 np0005533938 podman[82805]: 2025-11-24 18:20:03.215122514 +0000 UTC m=+0.200420584 container start b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:20:03 np0005533938 podman[82805]: 2025-11-24 18:20:03.220210054 +0000 UTC m=+0.205508124 container attach b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.306+0000 7fdc9c488640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.306+0000 7fdc9c488640 -1 AuthRegistry(0x7fdc94067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.308+0000 7fdc9c488640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.308+0000 7fdc9c488640 -1 AuthRegistry(0x7fdc9c487000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.311+0000 7fdc9a1fd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: 2025-11-24T18:20:03.311+0000 7fdc9c488640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 24 13:20:03 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-crash-compute-0[82799]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/605511265' entity='client.admin' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.uspkow", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 24 13:20:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 13:20:03 np0005533938 podman[82998]: 2025-11-24 18:20:03.927393955 +0000 UTC m=+0.046984119 container create a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:20:03 np0005533938 systemd[1]: Started libpod-conmon-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope.
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:03.908783557 +0000 UTC m=+0.028373721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:04.025163999 +0000 UTC m=+0.144754183 container init a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:04.030997199 +0000 UTC m=+0.150587363 container start a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:04.034101799 +0000 UTC m=+0.153691963 container attach a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:04 np0005533938 pensive_wescoff[83014]: 167 167
Nov 24 13:20:04 np0005533938 systemd[1]: libpod-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope: Deactivated successfully.
Nov 24 13:20:04 np0005533938 conmon[83014]: conmon a33bf2823f3ca2f433e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope/container/memory.events
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:04.036825929 +0000 UTC m=+0.156416093 container died a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:20:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f3b7f460d3cdad070b9f46081e6896fea3e8cb62547658ecc0bd6abce068e226-merged.mount: Deactivated successfully.
Nov 24 13:20:04 np0005533938 podman[82998]: 2025-11-24 18:20:04.074939549 +0000 UTC m=+0.194529733 container remove a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:04 np0005533938 systemd[1]: libpod-conmon-a33bf2823f3ca2f433e325849a8e9ee1cac4b3be8cce5f20e51114f88e7a403f.scope: Deactivated successfully.
Nov 24 13:20:04 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:04 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:04 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:04 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:04 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:04 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: Deploying daemon mgr.compute-0.uspkow on compute-0
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 24 13:20:04 np0005533938 sleepy_chaum[82822]: set require_min_compat_client to mimic
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 24 13:20:04 np0005533938 podman[82805]: 2025-11-24 18:20:04.537837969 +0000 UTC m=+1.523135999 container died b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:04 np0005533938 systemd[1]: libpod-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope: Deactivated successfully.
Nov 24 13:20:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-eac676a84535fc3db73ffdbaa0b3d92c89afd660b35f7945b050fb6096720cb0-merged.mount: Deactivated successfully.
Nov 24 13:20:04 np0005533938 systemd[1]: Starting Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:04 np0005533938 podman[82805]: 2025-11-24 18:20:04.664213048 +0000 UTC m=+1.649511078 container remove b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc (image=quay.io/ceph/ceph:v18, name=sleepy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 1 completed events
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:20:04 np0005533938 systemd[1]: libpod-conmon-b88b497727ea2a11ce3db1f98d8d9554680e7ce320c559849ac625105175d7bc.scope: Deactivated successfully.
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:04 np0005533938 podman[83174]: 2025-11-24 18:20:04.896635643 +0000 UTC m=+0.085751065 container create 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101/merged/var/lib/ceph/mgr/ceph-compute-0.uspkow supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:04 np0005533938 podman[83174]: 2025-11-24 18:20:04.949618566 +0000 UTC m=+0.138733958 container init 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:04 np0005533938 podman[83174]: 2025-11-24 18:20:04.954557483 +0000 UTC m=+0.143672875 container start 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:20:04 np0005533938 bash[83174]: 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e
Nov 24 13:20:04 np0005533938 podman[83174]: 2025-11-24 18:20:04.868308865 +0000 UTC m=+0.057424337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:04 np0005533938 systemd[1]: Started Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2))
Nov 24 13:20:05 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 31377859-8c04-4f77-a6b4-33708e64b87c (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: pidfile_write: ignore empty --pid-file
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'alerts'
Nov 24 13:20:05 np0005533938 python3[83274]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:05 np0005533938 podman[83341]: 2025-11-24 18:20:05.372390605 +0000 UTC m=+0.063837513 container create 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:05 np0005533938 systemd[1]: Started libpod-conmon-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope.
Nov 24 13:20:05 np0005533938 podman[83341]: 2025-11-24 18:20:05.341460379 +0000 UTC m=+0.032907387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:05 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:05 np0005533938 podman[83341]: 2025-11-24 18:20:05.468862615 +0000 UTC m=+0.160309563 container init 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:05 np0005533938 podman[83341]: 2025-11-24 18:20:05.475331671 +0000 UTC m=+0.166778589 container start 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:05 np0005533938 podman[83341]: 2025-11-24 18:20:05.478872992 +0000 UTC m=+0.170319990 container attach 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'balancer'
Nov 24 13:20:05 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:05.490+0000 7f0ddaee1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2121276909' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:05 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:05.738+0000 7f0ddaee1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 24 13:20:05 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'cephadm'
Nov 24 13:20:05 np0005533938 podman[83502]: 2025-11-24 18:20:05.905374416 +0000 UTC m=+0.054721178 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:20:06 np0005533938 podman[83502]: 2025-11-24 18:20:06.013180518 +0000 UTC m=+0.162527250 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c3f27d5-fab9-41e9-8980-73f288c1fc58 does not exist
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3bbdf5a1-cdc7-44a6-b413-bb1a62c7deb5 does not exist
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 70f3684e-1e94-4e4c-ba91-7cca2ed96e07 does not exist
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Added host compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 13:20:06 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 13:20:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:06 np0005533938 stupefied_albattani[83398]: Added host 'compute-0' with addr '192.168.122.100'
Nov 24 13:20:06 np0005533938 stupefied_albattani[83398]: Scheduled mon update...
Nov 24 13:20:06 np0005533938 stupefied_albattani[83398]: Scheduled mgr update...
Nov 24 13:20:06 np0005533938 stupefied_albattani[83398]: Scheduled osd.default_drive_group update...
Nov 24 13:20:06 np0005533938 systemd[1]: libpod-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope: Deactivated successfully.
Nov 24 13:20:06 np0005533938 podman[83341]: 2025-11-24 18:20:06.59517795 +0000 UTC m=+1.286624888 container died 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ec5ccee1abdeefd6c39068d5d702f801597f906455e3887ee1a7807a2622e095-merged.mount: Deactivated successfully.
Nov 24 13:20:06 np0005533938 podman[83341]: 2025-11-24 18:20:06.657128413 +0000 UTC m=+1.348575361 container remove 28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449 (image=quay.io/ceph/ceph:v18, name=stupefied_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:06 np0005533938 systemd[1]: libpod-conmon-28973ca501c3d852d5344a9ac141045d7bd1c8a2d414b96ed4fc3cab849ac449.scope: Deactivated successfully.
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.107379558 +0000 UTC m=+0.063669477 container create e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:20:07 np0005533938 systemd[1]: Started libpod-conmon-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope.
Nov 24 13:20:07 np0005533938 python3[83911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.075800477 +0000 UTC m=+0.032090396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.205532642 +0000 UTC m=+0.161822541 container init e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.209486194 +0000 UTC m=+0.038383748 container create f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.212639855 +0000 UTC m=+0.168929744 container start e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:07 np0005533938 hardcore_jang[83942]: 167 167
Nov 24 13:20:07 np0005533938 systemd[1]: libpod-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope: Deactivated successfully.
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.222119258 +0000 UTC m=+0.178409137 container attach e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.222529389 +0000 UTC m=+0.178819268 container died e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:20:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ad0caab53ef7ec768ca0c8aa68d4b9de2b14dce76fb7e6e8e62062a7c3834a58-merged.mount: Deactivated successfully.
Nov 24 13:20:07 np0005533938 podman[83914]: 2025-11-24 18:20:07.262751243 +0000 UTC m=+0.219041122 container remove e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:07 np0005533938 systemd[1]: Started libpod-conmon-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope.
Nov 24 13:20:07 np0005533938 systemd[1]: libpod-conmon-e1b169892b8ee50a9b1680d9b5c95ba61bd3151537b681f901c9d6792b6bd5e3.scope: Deactivated successfully.
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.191315286 +0000 UTC m=+0.020212860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.316580767 +0000 UTC m=+0.145478371 container init f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.330442753 +0000 UTC m=+0.159340317 container start f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:07 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 13:20:07 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.333317747 +0000 UTC m=+0.162215301 container attach f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:07 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 13:20:07 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Added host compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Saving service mon spec with placement compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Saving service mgr spec with placement compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Saving service osd.default_drive_group spec with placement compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dfqptp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 24 13:20:07 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'crash'
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.790337117 +0000 UTC m=+0.035967656 container create 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:07 np0005533938 systemd[1]: Started libpod-conmon-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope.
Nov 24 13:20:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.85968782 +0000 UTC m=+0.105318359 container init 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.86514509 +0000 UTC m=+0.110775659 container start 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:07 np0005533938 laughing_proskuriakova[84133]: 167 167
Nov 24 13:20:07 np0005533938 systemd[1]: libpod-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope: Deactivated successfully.
Nov 24 13:20:07 np0005533938 conmon[84133]: conmon 7959061500b934909e80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope/container/memory.events
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.869156223 +0000 UTC m=+0.114786762 container attach 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.870132398 +0000 UTC m=+0.115762927 container died 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.774169101 +0000 UTC m=+0.019799680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cf0745fc83558bf80f00c6ad14a6a5f90760cd19f33e933f4ea8597e0c99878e-merged.mount: Deactivated successfully.
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 13:20:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748237275' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 13:20:07 np0005533938 blissful_ardinghelli[83977]: 
Nov 24 13:20:07 np0005533938 blissful_ardinghelli[83977]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":80,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-24T18:18:44.978620+0000","services":{}},"progress_events":{}}
Nov 24 13:20:07 np0005533938 systemd[1]: libpod-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope: Deactivated successfully.
Nov 24 13:20:07 np0005533938 ceph-mgr[83194]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:20:07 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'dashboard'
Nov 24 13:20:07 np0005533938 conmon[83977]: conmon f3f2e6d246ac757bd0ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope/container/memory.events
Nov 24 13:20:07 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:07.940+0000 7f0ddaee1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 24 13:20:07 np0005533938 podman[84117]: 2025-11-24 18:20:07.951551671 +0000 UTC m=+0.197182220 container remove 7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_proskuriakova, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:20:07 np0005533938 podman[83945]: 2025-11-24 18:20:07.961156088 +0000 UTC m=+0.790053652 container died f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:20:07 np0005533938 systemd[1]: libpod-conmon-7959061500b934909e80c0781419222df4b5cf0a1925293d28e82f4bb0924d41.scope: Deactivated successfully.
Nov 24 13:20:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0b4d0e03d0f696741878194ba23784f8e395d94fb89243ca5250e42918e7a343-merged.mount: Deactivated successfully.
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:08 np0005533938 podman[83945]: 2025-11-24 18:20:08.040429036 +0000 UTC m=+0.869326600 container remove f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6 (image=quay.io/ceph/ceph:v18, name=blissful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:08 np0005533938 systemd[1]: libpod-conmon-f3f2e6d246ac757bd0ae46b321365769c2729cf17ef9bc3d464a506e54ea20c6.scope: Deactivated successfully.
Nov 24 13:20:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: Reconfiguring mgr.compute-0.dfqptp (unknown last config time)...
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: Reconfiguring daemon mgr.compute-0.dfqptp on compute-0
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 podman[84337]: 2025-11-24 18:20:09.011783499 +0000 UTC m=+0.087977323 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:09 np0005533938 podman[84337]: 2025-11-24 18:20:09.146534213 +0000 UTC m=+0.222727997 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:20:09 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'devicehealth'
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 65dbbbac-f8bf-4631-b7da-eeb20d52b674 does not exist
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:09 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1))
Nov 24 13:20:09 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 13:20:09 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 13:20:09 np0005533938 ceph-mgr[83194]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:20:09 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'diskprediction_local'
Nov 24 13:20:09 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:09.533+0000 7f0ddaee1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 24 13:20:09 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 2 completed events
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:20:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:10 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 24 13:20:10 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 24 13:20:10 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]:  from numpy import show_config as show_numpy_config
Nov 24 13:20:10 np0005533938 ceph-mgr[83194]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:20:10 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:10.046+0000 7f0ddaee1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 24 13:20:10 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'influx'
Nov 24 13:20:10 np0005533938 systemd[1]: Stopping Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:10 np0005533938 ceph-mgr[83194]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:20:10 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow[83190]: 2025-11-24T18:20:10.273+0000 7f0ddaee1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 24 13:20:10 np0005533938 ceph-mgr[83194]: mgr[py] Loading python module 'insights'
Nov 24 13:20:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:10 np0005533938 podman[84591]: 2025-11-24 18:20:10.462705469 +0000 UTC m=+0.102403063 container died 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-adffdc61dcd9c365741afc9a86b89a8b8a616a00471ff1fa91fa702567cd6101-merged.mount: Deactivated successfully.
Nov 24 13:20:10 np0005533938 podman[84591]: 2025-11-24 18:20:10.550809954 +0000 UTC m=+0.190507518 container remove 10bf68c28a30982a3be559035fe9897b6ce4223fe72231c3d9ff0c7d61c8d80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:20:10 np0005533938 bash[84591]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-uspkow
Nov 24 13:20:10 np0005533938 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Main process exited, code=exited, status=143/n/a
Nov 24 13:20:10 np0005533938 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Failed with result 'exit-code'.
Nov 24 13:20:10 np0005533938 systemd[1]: Stopped Ceph mgr.compute-0.uspkow for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:10 np0005533938 systemd[1]: ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.uspkow.service: Consumed 6.354s CPU time.
Nov 24 13:20:10 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:10 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:10 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: Removing daemon mgr.compute-0.uspkow from compute-0 -- ports [8765]
Nov 24 13:20:11 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.uspkow
Nov 24 13:20:11 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.uspkow
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"} v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]: dispatch
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]': finished
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:11 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1))
Nov 24 13:20:11 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 2556767f-a180-4740-8216-e5585c25e697 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:11 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 94bd629f-1ccd-46a3-a3d1-48a51d23cf59 does not exist
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.84201669 +0000 UTC m=+0.041357594 container create bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:20:11 np0005533938 systemd[1]: Started libpod-conmon-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope.
Nov 24 13:20:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.825221338 +0000 UTC m=+0.024562272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.934767795 +0000 UTC m=+0.134108699 container init bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.943568001 +0000 UTC m=+0.142908955 container start bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.947693577 +0000 UTC m=+0.147034521 container attach bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:11 np0005533938 epic_cohen[84850]: 167 167
Nov 24 13:20:11 np0005533938 systemd[1]: libpod-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope: Deactivated successfully.
Nov 24 13:20:11 np0005533938 conmon[84850]: conmon bcf01150f7222120cf02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope/container/memory.events
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.953927367 +0000 UTC m=+0.153268271 container died bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:20:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bc8618a5aba2d8a6830d5c9f432eac3ac5cef25a4e7df0e8c7c5ec17f62960f3-merged.mount: Deactivated successfully.
Nov 24 13:20:11 np0005533938 podman[84834]: 2025-11-24 18:20:11.994257954 +0000 UTC m=+0.193598868 container remove bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:20:12 np0005533938 systemd[1]: libpod-conmon-bcf01150f7222120cf026f892da6eefa8b2594d1ebde5b33945c5b3abc9a32be.scope: Deactivated successfully.
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]: dispatch
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.uspkow"}]': finished
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:20:12 np0005533938 podman[84872]: 2025-11-24 18:20:12.178277825 +0000 UTC m=+0.041881578 container create 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:12 np0005533938 systemd[1]: Started libpod-conmon-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope.
Nov 24 13:20:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:12 np0005533938 podman[84872]: 2025-11-24 18:20:12.161415921 +0000 UTC m=+0.025019684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:12 np0005533938 podman[84872]: 2025-11-24 18:20:12.256976388 +0000 UTC m=+0.120580151 container init 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:20:12 np0005533938 podman[84872]: 2025-11-24 18:20:12.264978494 +0000 UTC m=+0.128582267 container start 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:12 np0005533938 podman[84872]: 2025-11-24 18:20:12.268754661 +0000 UTC m=+0.132358424 container attach 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:20:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: Removing key for mgr.compute-0.uspkow
Nov 24 13:20:13 np0005533938 competent_elion[84888]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:20:13 np0005533938 competent_elion[84888]: --> relative data size: 1.0
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1f8f8fab-5f72-4f8f-b22f-80baf46bd30b
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"} v 0) v1
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]: dispatch
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]': finished
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:13 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:13 np0005533938 lvm[84949]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:20:13 np0005533938 lvm[84949]: VG ceph_vg0 finished
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:13 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]: dispatch
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/689176563' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b"}]': finished
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593176764' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 13:20:14 np0005533938 competent_elion[84888]: stderr: got monmap epoch 1
Nov 24 13:20:14 np0005533938 competent_elion[84888]: --> Creating keyring file for osd.0
Nov 24 13:20:14 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 24 13:20:14 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 24 13:20:14 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1f8f8fab-5f72-4f8f-b22f-80baf46bd30b --setuser ceph --setgroup ceph
Nov 24 13:20:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:14 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 3 completed events
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:20:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 13:20:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 13:20:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:16 np0005533938 ceph-mon[74927]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 24 13:20:16 np0005533938 ceph-mon[74927]: Cluster is now healthy
Nov 24 13:20:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:17 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:17 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:17 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:17 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:14.434+0000 7f4e60011740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 24 13:20:17 np0005533938 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:17 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 79b9678c-793a-417c-9179-1829e79d1a19
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"} v 0) v1
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]: dispatch
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]': finished
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:18 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:18 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]: dispatch
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2438235050' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "79b9678c-793a-417c-9179-1829e79d1a19"}]': finished
Nov 24 13:20:18 np0005533938 lvm[85883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:20:18 np0005533938 lvm[85883]: VG ceph_vg1 finished
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 24 13:20:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 13:20:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460886543' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 13:20:18 np0005533938 competent_elion[84888]: stderr: got monmap epoch 1
Nov 24 13:20:18 np0005533938 competent_elion[84888]: --> Creating keyring file for osd.1
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 79b9678c-793a-417c-9179-1829e79d1a19 --setuser ceph --setgroup ceph
Nov 24 13:20:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:21 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:21 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:21 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:21 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:18.842+0000 7fac60dc6740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 24 13:20:21 np0005533938 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:21 np0005533938 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 24 13:20:21 np0005533938 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
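The run above is ceph-volume's `lvm create` flow for osd.1 (prepare, then activate), with each step logged as a `Running command:` line. A minimal sketch for recovering that command sequence from journald output — the `MMM DD HH:MM:SS host name[pid]:` prefix format is assumed from the lines above, and the helper name is illustrative:

```python
import re

# journald prefix format assumed from the log above:
# "MMM DD HH:MM:SS hostname name[pid]: message"
PREFIX = re.compile(r"^\w{3} +\d+ [\d:]+ \S+ \S+\[\d+\]: ")

def running_commands(lines):
    """Return the argv strings from ceph-volume 'Running command:' log lines."""
    marker = "Running command: "
    cmds = []
    for line in lines:
        msg = PREFIX.sub("", line)       # drop the journald prefix
        if msg.startswith(marker):
            cmds.append(msg[len(marker):])
    return cmds

sample = [
    "Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key",
    "Nov 24 13:20:18 np0005533938 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1",
    "Nov 24 13:20:18 np0005533938 competent_elion[84888]: --> Creating keyring file for osd.1",
]
print(running_commands(sample))
# The '-->' status lines are skipped; only the executed commands are kept.
```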
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:21 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d6904eab-3369-4532-8b99-18f2965a8556
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"} v 0) v1
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]: dispatch
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]': finished
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:21 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:21 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:21 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:22 np0005533938 lvm[86821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:20:22 np0005533938 lvm[86821]: VG ceph_vg2 finished
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 24 13:20:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:22 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]: dispatch
Nov 24 13:20:22 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3989597058' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6904eab-3369-4532-8b99-18f2965a8556"}]': finished
Nov 24 13:20:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 24 13:20:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389139640' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 24 13:20:22 np0005533938 competent_elion[84888]: stderr: got monmap epoch 1
Nov 24 13:20:22 np0005533938 competent_elion[84888]: --> Creating keyring file for osd.2
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 24 13:20:22 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid d6904eab-3369-4532-8b99-18f2965a8556 --setuser ceph --setgroup ceph
Nov 24 13:20:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:25 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:25 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:25 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 24 13:20:25 np0005533938 competent_elion[84888]: stderr: 2025-11-24T18:20:22.666+0000 7fa747aff740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 24 13:20:25 np0005533938 competent_elion[84888]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 13:20:25 np0005533938 competent_elion[84888]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:25 np0005533938 competent_elion[84888]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 24 13:20:25 np0005533938 competent_elion[84888]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 24 13:20:25 np0005533938 systemd[1]: libpod-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Deactivated successfully.
Nov 24 13:20:25 np0005533938 systemd[1]: libpod-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Consumed 5.914s CPU time.
Nov 24 13:20:25 np0005533938 podman[84872]: 2025-11-24 18:20:25.251107959 +0000 UTC m=+13.114711712 container died 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 13:20:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-77de3137e9cee6f69567c41c423164b2b9265e090a7ea7ca06f9540e2561a1ec-merged.mount: Deactivated successfully.
Nov 24 13:20:25 np0005533938 podman[84872]: 2025-11-24 18:20:25.326817445 +0000 UTC m=+13.190421188 container remove 88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:20:25 np0005533938 systemd[1]: libpod-conmon-88b18c17768c2c5db871dc67d5a25fa2fd9e8709905d1dcbe5207f2cf6826ed3.scope: Deactivated successfully.
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.023366793 +0000 UTC m=+0.056594626 container create a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:26 np0005533938 systemd[1]: Started libpod-conmon-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope.
Nov 24 13:20:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.001286786 +0000 UTC m=+0.034514639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.110434912 +0000 UTC m=+0.143662785 container init a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.117604126 +0000 UTC m=+0.150831979 container start a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.121571498 +0000 UTC m=+0.154799401 container attach a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:20:26 np0005533938 suspicious_raman[87894]: 167 167
Nov 24 13:20:26 np0005533938 systemd[1]: libpod-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope: Deactivated successfully.
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.124118553 +0000 UTC m=+0.157346386 container died a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:26 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5a02e8cd951dd24155a4356d0f2b590c2bdfaa56637503df96f156b65089185a-merged.mount: Deactivated successfully.
Nov 24 13:20:26 np0005533938 podman[87878]: 2025-11-24 18:20:26.160352665 +0000 UTC m=+0.193580488 container remove a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:20:26 np0005533938 systemd[1]: libpod-conmon-a956892acbe362964e894a1e3c26cc0968d2e355bf140a55d95902cbaee52dd0.scope: Deactivated successfully.
Nov 24 13:20:26 np0005533938 podman[87918]: 2025-11-24 18:20:26.407980611 +0000 UTC m=+0.069424976 container create 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:26 np0005533938 systemd[1]: Started libpod-conmon-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope.
Nov 24 13:20:26 np0005533938 podman[87918]: 2025-11-24 18:20:26.3822565 +0000 UTC m=+0.043700865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:26 np0005533938 podman[87918]: 2025-11-24 18:20:26.527623137 +0000 UTC m=+0.189067492 container init 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:20:26 np0005533938 podman[87918]: 2025-11-24 18:20:26.537528562 +0000 UTC m=+0.198972897 container start 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:26 np0005533938 podman[87918]: 2025-11-24 18:20:26.540735824 +0000 UTC m=+0.202180179 container attach 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
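Below, the `kind_jepsen` container prints a JSON inventory of the OSD logical volumes, but journald records it line by line, each with its own timestamp prefix. A minimal sketch (assuming the same `name[pid]:` prefix format as above; the sample payload is a shortened stand-in, not the full log output) that strips the prefixes and reassembles the JSON:

```python
import json
import re

# Assumed journald prefix, as seen in the log: "MMM DD HH:MM:SS host name[pid]: "
PREFIX = re.compile(r"^\w{3} +\d+ [\d:]+ \S+ \S+\[\d+\]: ")

def extract_json(lines):
    """Strip journald prefixes from each line and parse the rejoined JSON payload."""
    body = "\n".join(PREFIX.sub("", line) for line in lines)
    return json.loads(body)

# Shortened stand-in for the per-OSD listing that follows in the log.
sample = [
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]: {',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    "0": [',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        {',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0"',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        }',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    ]',
    'Nov 24 13:20:27 np0005533938 kind_jepsen[87935]: }',
]
osds = extract_json(sample)
print(osds["0"][0]["lv_path"])
# Leading indentation left after the prefix is harmless: JSON ignores whitespace.
```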
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]: {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    "0": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "devices": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "/dev/loop3"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            ],
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_name": "ceph_lv0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_size": "21470642176",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "name": "ceph_lv0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "tags": {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.crush_device_class": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.encrypted": "0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_id": "0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.vdo": "0"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            },
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "vg_name": "ceph_vg0"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        }
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    ],
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    "1": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "devices": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "/dev/loop4"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            ],
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_name": "ceph_lv1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_size": "21470642176",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "name": "ceph_lv1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "tags": {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.crush_device_class": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.encrypted": "0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_id": "1",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.vdo": "0"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            },
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "vg_name": "ceph_vg1"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        }
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    ],
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    "2": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "devices": [
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "/dev/loop5"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            ],
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_name": "ceph_lv2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_size": "21470642176",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "name": "ceph_lv2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "tags": {
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.crush_device_class": "",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.encrypted": "0",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osd_id": "2",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:                "ceph.vdo": "0"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            },
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "type": "block",
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:            "vg_name": "ceph_vg2"
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:        }
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]:    ]
Nov 24 13:20:27 np0005533938 kind_jepsen[87935]: }
Nov 24 13:20:27 np0005533938 systemd[1]: libpod-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope: Deactivated successfully.
Nov 24 13:20:27 np0005533938 podman[87918]: 2025-11-24 18:20:27.325145531 +0000 UTC m=+0.986589956 container died 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:27 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8a72f9a5707404ed9ccb1440589838a7119828b9754f7e9427d6fd2ebd3ebd6d-merged.mount: Deactivated successfully.
Nov 24 13:20:27 np0005533938 podman[87918]: 2025-11-24 18:20:27.382682234 +0000 UTC m=+1.044126569 container remove 104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:27 np0005533938 systemd[1]: libpod-conmon-104d1b75c0253336176a85ffa5e47ded951ecb11845fd2951a1c7cbc9b6d98ad.scope: Deactivated successfully.
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:27 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 24 13:20:27 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:27 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.100747719 +0000 UTC m=+0.037830326 container create e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:28 np0005533938 systemd[1]: Started libpod-conmon-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope.
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.084651143 +0000 UTC m=+0.021733760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.201150085 +0000 UTC m=+0.138232722 container init e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.21482608 +0000 UTC m=+0.151908677 container start e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.219342584 +0000 UTC m=+0.156425231 container attach e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:28 np0005533938 hopeful_goldstine[88112]: 167 167
Nov 24 13:20:28 np0005533938 systemd[1]: libpod-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope: Deactivated successfully.
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.221864508 +0000 UTC m=+0.158947135 container died e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:20:28 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f6953e5998dcb0509b60e291de41e43eab87ebdec19a79900b876136ecf81b44-merged.mount: Deactivated successfully.
Nov 24 13:20:28 np0005533938 podman[88096]: 2025-11-24 18:20:28.264562436 +0000 UTC m=+0.201645073 container remove e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:28 np0005533938 systemd[1]: libpod-conmon-e39d70a8396212e99d9c64d0ed36f11c4102f44c946bd88a8a56a0ee50b9d53a.scope: Deactivated successfully.
Nov 24 13:20:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:28 np0005533938 ceph-mon[74927]: Deploying daemon osd.0 on compute-0
Nov 24 13:20:28 np0005533938 podman[88143]: 2025-11-24 18:20:28.606447531 +0000 UTC m=+0.062961531 container create 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:28 np0005533938 systemd[1]: Started libpod-conmon-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope.
Nov 24 13:20:28 np0005533938 podman[88143]: 2025-11-24 18:20:28.586470656 +0000 UTC m=+0.042984706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:28 np0005533938 podman[88143]: 2025-11-24 18:20:28.720502522 +0000 UTC m=+0.177016562 container init 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:28 np0005533938 podman[88143]: 2025-11-24 18:20:28.737334447 +0000 UTC m=+0.193848477 container start 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:28 np0005533938 podman[88143]: 2025-11-24 18:20:28.742238741 +0000 UTC m=+0.198752771 container attach 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:29 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 13:20:29 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]:                            [--no-systemd] [--no-tmpfs]
Nov 24 13:20:29 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test[88160]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 13:20:29 np0005533938 systemd[1]: libpod-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope: Deactivated successfully.
Nov 24 13:20:29 np0005533938 podman[88143]: 2025-11-24 18:20:29.419643879 +0000 UTC m=+0.876157889 container died 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:29 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c0e73d32a09b0574fe80a51fa7600e29037095c8650082785ef4e8d311c1abf2-merged.mount: Deactivated successfully.
Nov 24 13:20:29 np0005533938 podman[88143]: 2025-11-24 18:20:29.482345493 +0000 UTC m=+0.938859503 container remove 8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:20:29 np0005533938 systemd[1]: libpod-conmon-8325d1650e98539ffb6fc427e6b24cbff638ef4620338ca5aab75d67ba07be9c.scope: Deactivated successfully.
Nov 24 13:20:29 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:29 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:29 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:30 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:30 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:30 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:30 np0005533938 systemd[1]: Starting Ceph osd.0 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:30 np0005533938 podman[88321]: 2025-11-24 18:20:30.691170403 +0000 UTC m=+0.046542417 container create 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:30 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:30 np0005533938 podman[88321]: 2025-11-24 18:20:30.66968172 +0000 UTC m=+0.025053694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:30 np0005533938 podman[88321]: 2025-11-24 18:20:30.778427526 +0000 UTC m=+0.133799520 container init 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:30 np0005533938 podman[88321]: 2025-11-24 18:20:30.789889616 +0000 UTC m=+0.145261630 container start 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:20:30 np0005533938 podman[88321]: 2025-11-24 18:20:30.79440412 +0000 UTC m=+0.149776094 container attach 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:31 np0005533938 bash[88321]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 24 13:20:31 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate[88337]: --> ceph-volume raw activate successful for osd ID: 0
Nov 24 13:20:31 np0005533938 bash[88321]: --> ceph-volume raw activate successful for osd ID: 0
Nov 24 13:20:31 np0005533938 systemd[1]: libpod-92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef.scope: Deactivated successfully.
Nov 24 13:20:31 np0005533938 podman[88321]: 2025-11-24 18:20:31.928625705 +0000 UTC m=+1.283997679 container died 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:31 np0005533938 systemd[1]: libpod-92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef.scope: Consumed 1.154s CPU time.
Nov 24 13:20:31 np0005533938 systemd[1]: var-lib-containers-storage-overlay-642032cdb9a0d7e7b49e226d85f746a3a6b3fe6c3ba1886f636a251a2d2daab5-merged.mount: Deactivated successfully.
Nov 24 13:20:31 np0005533938 podman[88321]: 2025-11-24 18:20:31.995178355 +0000 UTC m=+1.350550329 container remove 92bc3a0d28fe5bcf7a8430e13c62f9639815502c401f2281f94d1cb029367eef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:20:32 np0005533938 podman[88525]: 2025-11-24 18:20:32.258369083 +0000 UTC m=+0.051245686 container create 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:20:32 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:32 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:32 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:32 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:32 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56fd52e73cca515bfe6cf463897559755b409a739bc4e2953c8c1596cecbc21/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:32 np0005533938 podman[88525]: 2025-11-24 18:20:32.239766143 +0000 UTC m=+0.032642756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:32 np0005533938 podman[88525]: 2025-11-24 18:20:32.34022396 +0000 UTC m=+0.133100573 container init 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 13:20:32 np0005533938 podman[88525]: 2025-11-24 18:20:32.350887679 +0000 UTC m=+0.143764312 container start 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:20:32 np0005533938 bash[88525]: 9c8b4f7ebd6278ab85f8ff0f61c024387fd070be0b3dda8e6c486672d394dda2
Nov 24 13:20:32 np0005533938 systemd[1]: Started Ceph osd.0 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: pidfile_write: ignore empty --pid-file
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab25fc3800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:32 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 24 13:20:32 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 24 13:20:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab2518b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: load: jerasure load: lrc 
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:32 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.067493868 +0000 UTC m=+0.040409082 container create 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:33 np0005533938 systemd[1]: Started libpod-conmon-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope.
Nov 24 13:20:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.049127504 +0000 UTC m=+0.022042748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.159164903 +0000 UTC m=+0.132080117 container init 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.169153625 +0000 UTC m=+0.142068839 container start 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:33 np0005533938 keen_banzai[88718]: 167 167
Nov 24 13:20:33 np0005533938 systemd[1]: libpod-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope: Deactivated successfully.
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.176823159 +0000 UTC m=+0.149738393 container attach 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.177617199 +0000 UTC m=+0.150532413 container died 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 24 13:20:33 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dc497a1f9620ff4f7672f9b5b0fd88d2078ab24d9ae2c28919cbd94defb8bc53-merged.mount: Deactivated successfully.
Nov 24 13:20:33 np0005533938 podman[88702]: 2025-11-24 18:20:33.218625595 +0000 UTC m=+0.191540819 container remove 70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_banzai, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 13:20:33 np0005533938 systemd[1]: libpod-conmon-70b21e5f38ecd82a90e3fe82a786d36c0c0f43afa3f9d4a048321ff6b982b300.scope: Deactivated successfully.
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: Deploying daemon osd.1 on compute-0
Nov 24 13:20:33 np0005533938 podman[88753]: 2025-11-24 18:20:33.496627056 +0000 UTC m=+0.053937633 container create 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26044c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs mount
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs mount shared_bdev_used = 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Git sha 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DB SUMMARY
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DB Session ID:  4HGE9OKKRCKBG2QLOBGS
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                     Options.env: 0x55ab26015d50
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                Options.info_log: 0x55ab252127e0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.write_buffer_manager: 0x55ab2611e460
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Compression algorithms supported:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kZSTD supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kXpressCompression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kBZip2Compression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kZSTDNotFinalCompression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kLZ4Compression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kZlibCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kLZ4HCCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   kSnappyCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab251ff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab251ff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab251ff1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25212180)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:33 np0005533938 systemd[1]: Started libpod-conmon-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope.
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c88edec1-a146-434a-86d2-25bed20784f7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433535761, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433536087, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: freelist init
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: freelist _read_cfg
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs umount
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) close
Nov 24 13:20:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:33 np0005533938 podman[88753]: 2025-11-24 18:20:33.47817527 +0000 UTC m=+0.035485867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:33 np0005533938 podman[88753]: 2025-11-24 18:20:33.631245986 +0000 UTC m=+0.188556583 container init 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:33 np0005533938 podman[88753]: 2025-11-24 18:20:33.637829162 +0000 UTC m=+0.195139739 container start 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:33 np0005533938 podman[88753]: 2025-11-24 18:20:33.644034099 +0000 UTC m=+0.201344676 container attach 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bdev(0x55ab26045400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs mount
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluefs mount shared_bdev_used = 4718592
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Git sha 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DB SUMMARY
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DB Session ID:  4HGE9OKKRCKBG2QLOBGT
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                     Options.env: 0x55ab261ae1c0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                Options.info_log: 0x55ab26011700
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.write_buffer_manager: 0x55ab2611e6e0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Compression algorithms supported:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kZSTD supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kXpressCompression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kZlibCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208f80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ab251ff090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab251ff090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab25208fe0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55ab251ff090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c88edec1-a146-434a-86d2-25bed20784f7
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433816123, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433823065, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433835596, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433852301, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008433, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c88edec1-a146-434a-86d2-25bed20784f7", "db_session_id": "4HGE9OKKRCKBG2QLOBGT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008433855218, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab2536dc00
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: DB pointer 0x55ab26107a00
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 460.80 MB usag
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: _get_class not permitted to load lua
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: _get_class not permitted to load sdk
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: _get_class not permitted to load test_remote_reads
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 load_pgs
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 load_pgs opened 0 pgs
Nov 24 13:20:33 np0005533938 ceph-osd[88544]: osd.0 0 log_to_monitors true
Nov 24 13:20:33 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:33.952+0000 7fc52ebf4740 -1 osd.0 0 log_to_monitors true
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 24 13:20:33 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 13:20:34 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]:                            [--no-systemd] [--no-tmpfs]
Nov 24 13:20:34 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test[88964]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 13:20:34 np0005533938 systemd[1]: libpod-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope: Deactivated successfully.
Nov 24 13:20:34 np0005533938 podman[88753]: 2025-11-24 18:20:34.264396197 +0000 UTC m=+0.821706774 container died 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:34 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2967fed2a32ba5be3838bf6389ae38c7e34e201481bad1be49150d1392790ce7-merged.mount: Deactivated successfully.
Nov 24 13:20:34 np0005533938 podman[88753]: 2025-11-24 18:20:34.426758687 +0000 UTC m=+0.984069304 container remove 3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:20:34 np0005533938 systemd[1]: libpod-conmon-3e24f763da7dfee426392b2c59c43d5113108daf4c1c63bf073a71a902c368dc.scope: Deactivated successfully.
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:20:34
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] No pools available
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:20:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:20:34 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 13:20:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 13:20:34 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:34 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:35 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:35 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:35 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:35 np0005533938 systemd[1]: Starting Ceph osd.1 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 done with init, starting boot process
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 start_boot
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 13:20:35 np0005533938 ceph-osd[88544]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:35 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:35 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:35 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:35 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:35 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:35 np0005533938 podman[89342]: 2025-11-24 18:20:35.664538377 +0000 UTC m=+0.054204910 container create aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:35 np0005533938 podman[89342]: 2025-11-24 18:20:35.639170677 +0000 UTC m=+0.028837240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:35 np0005533938 podman[89342]: 2025-11-24 18:20:35.816138526 +0000 UTC m=+0.205805089 container init aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 24 13:20:35 np0005533938 podman[89342]: 2025-11-24 18:20:35.822478186 +0000 UTC m=+0.212144729 container start aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:35 np0005533938 podman[89342]: 2025-11-24 18:20:35.856936416 +0000 UTC m=+0.246602969 container attach aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:36 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:36 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:36 np0005533938 ceph-mon[74927]: from='osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:36 np0005533938 bash[89342]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 24 13:20:36 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate[89357]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 13:20:36 np0005533938 bash[89342]: --> ceph-volume raw activate successful for osd ID: 1
Nov 24 13:20:36 np0005533938 systemd[1]: libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope: Deactivated successfully.
Nov 24 13:20:36 np0005533938 systemd[1]: libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope: Consumed 1.101s CPU time.
Nov 24 13:20:36 np0005533938 conmon[89357]: conmon aa7a00067d0094daa4df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a.scope/container/memory.events
Nov 24 13:20:36 np0005533938 podman[89342]: 2025-11-24 18:20:36.925536695 +0000 UTC m=+1.315203218 container died aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ff3a2b244f8723007f4882ddc7c135909d801d3c3276ef49ed2f7d8d198bd69b-merged.mount: Deactivated successfully.
Nov 24 13:20:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:37 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:37 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:37 np0005533938 podman[89342]: 2025-11-24 18:20:37.606728769 +0000 UTC m=+1.996395332 container remove aa7a00067d0094daa4df9709a8fae2182634f4f4bd1561cc83217b3cda347f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:37 np0005533938 podman[89537]: 2025-11-24 18:20:37.903654838 +0000 UTC m=+0.099838012 container create edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:37 np0005533938 podman[89537]: 2025-11-24 18:20:37.839610481 +0000 UTC m=+0.035793665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9091d7b35ccc3e5ef83d0c61ea1a3f7f6bcce05a3a37047686ce2a2e7105ae38/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:38 np0005533938 podman[89537]: 2025-11-24 18:20:38.181291839 +0000 UTC m=+0.377475063 container init edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:38 np0005533938 podman[89537]: 2025-11-24 18:20:38.194940514 +0000 UTC m=+0.391123688 container start edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: pidfile_write: ignore empty --pid-file
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41393800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b4055b800 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 13:20:38 np0005533938 bash[89537]: edbd9c794ff6da0dfcdb98ed6aaaf0ca5ebc8143fc908cf4f59185fafabf5dd8
Nov 24 13:20:38 np0005533938 systemd[1]: Started Ceph osd.1 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:38 np0005533938 python3[89582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:20:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: load: jerasure load: lrc 
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:38 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 13:20:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:38 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:38 np0005533938 podman[89598]: 2025-11-24 18:20:38.50404085 +0000 UTC m=+0.035638421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:20:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:38 np0005533938 podman[89598]: 2025-11-24 18:20:38.738994984 +0000 UTC m=+0.270592495 container create 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:38 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 13:20:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:38 np0005533938 systemd[1]: Started libpod-conmon-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope.
Nov 24 13:20:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41414c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs mount
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs mount shared_bdev_used = 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Git sha 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DB SUMMARY
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DB Session ID:  M68LIBJHY0K5KHYYLOTW
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                     Options.env: 0x560b413e5c70
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                Options.info_log: 0x560b405e28a0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.write_buffer_manager: 0x560b414ee460
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Compression algorithms supported:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZSTD supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kXpressCompression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZlibCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560b405cf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560b405cf1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e22c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560b405cf090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560b405cf090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 51802cb1-f710-439e-8cb3-c13c7c81f345
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439106949, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439107157, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: freelist init
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: freelist _read_cfg
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs umount
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) close
Nov 24 13:20:39 np0005533938 podman[89598]: 2025-11-24 18:20:39.252273207 +0000 UTC m=+0.783870728 container init 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:20:39 np0005533938 podman[89598]: 2025-11-24 18:20:39.262755672 +0000 UTC m=+0.794353173 container start 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bdev(0x560b41415400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs mount
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluefs mount shared_bdev_used = 4718592
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Git sha 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DB SUMMARY
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DB Session ID:  M68LIBJHY0K5KHYYLOTX
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                     Options.env: 0x560b41596b60
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                Options.info_log: 0x560b405e2600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.write_buffer_manager: 0x560b414ee460
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Compression algorithms supported:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZSTD supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kXpressCompression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kZlibCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x560b405cf1f0
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 483183820
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2a20)
                                                 cache_index_and_filter_blocks: 1
                                                 cache_index_and_filter_blocks_with_high_priority: 0
                                                 pin_l0_filter_and_index_blocks_in_cache: 0
                                                 pin_top_level_index_and_filter: 1
                                                 index_type: 0
                                                 data_block_index_type: 0
                                                 index_shortening: 1
                                                 data_block_hash_table_util_ratio: 0.750000
                                                 checksum: 4
                                                 no_block_cache: 0
                                                 block_cache: 0x560b405cf1f0
                                                 block_cache_name: BinnedLRUCache
                                                 block_cache_options:
                                                   capacity : 483183820
                                                   num_shard_bits : 4
                                                   strict_capacity_limit : 0
                                                   high_pri_pool_ratio: 0.000
                                                 block_cache_compressed: (nil)
                                                 persistent_cache: (nil)
                                                 block_size: 4096
                                                 block_size_deviation: 10
                                                 block_restart_interval: 16
                                                 index_block_restart_interval: 1
                                                 metadata_block_size: 4096
                                                 partition_filters: 0
                                                 use_delta_encoding: 1
                                                 filter_policy: bloomfilter
                                                 whole_key_filtering: 1
                                                 verify_compression: 0
                                                 read_amp_bytes_per_bit: 0
                                                 format_version: 5
                                                 enable_index_compression: 1
                                                 block_align: 0
                                                 max_auto_readahead_size: 262144
                                                 prepopulate_block_cache: 0
                                                 initial_auto_readahead_size: 8192
                                                 num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b405e2380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560b405cf090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 51802cb1-f710-439e-8cb3-c13c7c81f345
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439376742, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:39 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:39 np0005533938 podman[89598]: 2025-11-24 18:20:39.563030486 +0000 UTC m=+1.094627977 container attach 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:39 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439741429, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439746577, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:39 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 24 13:20:39 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439801336, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008439, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "51802cb1-f710-439e-8cb3-c13c7c81f345", "db_session_id": "M68LIBJHY0K5KHYYLOTX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 13:20:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656158072' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 13:20:39 np0005533938 adoring_jackson[89624]: 
Nov 24 13:20:39 np0005533938 adoring_jackson[89624]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008439896810, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:39 np0005533938 ceph-osd[89581]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 13:20:39 np0005533938 systemd[1]: libpod-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope: Deactivated successfully.
Nov 24 13:20:39 np0005533938 podman[89598]: 2025-11-24 18:20:39.915331194 +0000 UTC m=+1.446928685 container died 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a65c08a2c6a8cddee7844bd1a3d314cfee7dcb63eca1f6886140b1c5a4331fb4-merged.mount: Deactivated successfully.
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: Deploying daemon osd.2 on compute-0
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560b4073c000
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: rocksdb: DB pointer 0x560b414d7a00
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.0 total, 1.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.0 total, 1.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 460.80 MB usag
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: _get_class not permitted to load lua
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: _get_class not permitted to load sdk
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: _get_class not permitted to load test_remote_reads
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 load_pgs
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 load_pgs opened 0 pgs
Nov 24 13:20:40 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:40.406+0000 7f4b025f9740 -1 osd.1 0 log_to_monitors true
Nov 24 13:20:40 np0005533938 ceph-osd[89581]: osd.1 0 log_to_monitors true
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 13:20:40 np0005533938 podman[89598]: 2025-11-24 18:20:40.429092969 +0000 UTC m=+1.960690460 container remove 57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526 (image=quay.io/ceph/ceph:v18, name=adoring_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:40 np0005533938 systemd[1]: libpod-conmon-57a4ec467aec38b579ca04ca1b25e29b8c22f3f8f1d4ae950daea2e89aece526.scope: Deactivated successfully.
Nov 24 13:20:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:40 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:40 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.551803939 +0000 UTC m=+0.021887834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.675572634 +0000 UTC m=+0.145656529 container create 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:40 np0005533938 systemd[1]: Started libpod-conmon-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope.
Nov 24 13:20:40 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.802151461 +0000 UTC m=+0.272235366 container init 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.807926907 +0000 UTC m=+0.278010792 container start 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:20:40 np0005533938 nervous_robinson[90229]: 167 167
Nov 24 13:20:40 np0005533938 systemd[1]: libpod-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope: Deactivated successfully.
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.828317692 +0000 UTC m=+0.298401597 container attach 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.828751383 +0000 UTC m=+0.298835268 container died 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:20:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-180ff14d8594bd53788712a3cbe6db9c9713cc8d3ad0406e2595591fb8286187-merged.mount: Deactivated successfully.
Nov 24 13:20:40 np0005533938 podman[90212]: 2025-11-24 18:20:40.931146209 +0000 UTC m=+0.401230134 container remove 9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:20:40 np0005533938 systemd[1]: libpod-conmon-9daea98b02596dbf5817f2b66743d62ab8d4117df2fea0a60da9b1614ae4bd23.scope: Deactivated successfully.
Nov 24 13:20:41 np0005533938 podman[90261]: 2025-11-24 18:20:41.232286465 +0000 UTC m=+0.068049500 container create 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:41 np0005533938 podman[90261]: 2025-11-24 18:20:41.196563763 +0000 UTC m=+0.032326898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 24 13:20:41 np0005533938 systemd[1]: Started libpod-conmon-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope.
Nov 24 13:20:41 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 24 13:20:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 13:20:41 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:41 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:41 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:41 np0005533938 podman[90261]: 2025-11-24 18:20:41.355256401 +0000 UTC m=+0.191019446 container init 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:41 np0005533938 podman[90261]: 2025-11-24 18:20:41.363054128 +0000 UTC m=+0.198817173 container start 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:20:41 np0005533938 podman[90261]: 2025-11-24 18:20:41.374661411 +0000 UTC m=+0.210424446 container attach 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:20:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 13:20:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 13:20:41 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:41 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 24 13:20:42 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]:                            [--no-systemd] [--no-tmpfs]
Nov 24 13:20:42 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test[90276]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 24 13:20:42 np0005533938 systemd[1]: libpod-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope: Deactivated successfully.
Nov 24 13:20:42 np0005533938 podman[90261]: 2025-11-24 18:20:42.055871954 +0000 UTC m=+0.891635029 container died 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay-93211d7b969363c0b9a6df6d83ba6ff6ded9c3f4c2cbd87cc521a9a5adb61ad3-merged.mount: Deactivated successfully.
Nov 24 13:20:42 np0005533938 podman[90261]: 2025-11-24 18:20:42.137518576 +0000 UTC m=+0.973281611 container remove 82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:20:42 np0005533938 systemd[1]: libpod-conmon-82a9c183b3f62395376c91cb8bcfd9c9da9a63532b2fe4f4f18edb8c7afdd6af.scope: Deactivated successfully.
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 done with init, starting boot process
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 start_boot
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 13:20:42 np0005533938 ceph-osd[89581]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:42 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 24 13:20:42 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:42 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3004045453; not ready for session (expect reconnect)
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:42 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.320 iops: 5713.855 elapsed_sec: 0.525
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: log_channel(cluster) log [WRN] : OSD bench result of 5713.854944 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 0 waiting for initial osdmap
Nov 24 13:20:42 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:42.693+0000 7fc52ab74640 -1 osd.0 0 waiting for initial osdmap
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 check_osdmap_features require_osd_release unknown -> reef
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 set_numa_affinity not setting numa affinity
Nov 24 13:20:42 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-0[88540]: 2025-11-24T18:20:42.730+0000 7fc52619c640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:42 np0005533938 ceph-osd[88544]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 24 13:20:42 np0005533938 systemd[1]: Reloading.
Nov 24 13:20:42 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:20:42 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:20:43 np0005533938 systemd[1]: Starting Ceph osd.2 for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:20:43 np0005533938 podman[90438]: 2025-11-24 18:20:43.301894704 +0000 UTC m=+0.052324243 container create 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:20:43 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:43 np0005533938 podman[90438]: 2025-11-24 18:20:43.270126591 +0000 UTC m=+0.020556180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:43 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 24 13:20:43 np0005533938 podman[90438]: 2025-11-24 18:20:43.460497859 +0000 UTC m=+0.210927398 container init 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: from='osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: OSD bench result of 5713.854944 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:43 np0005533938 podman[90438]: 2025-11-24 18:20:43.467116937 +0000 UTC m=+0.217546466 container start 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:43 np0005533938 ceph-osd[88544]: osd.0 11 state: booting -> active
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453] boot
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 24 13:20:43 np0005533938 podman[90438]: 2025-11-24 18:20:43.508450521 +0000 UTC m=+0.258880090 container attach 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:43 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:43 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:44 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:44 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:44 np0005533938 bash[90438]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 24 13:20:44 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate[90453]: --> ceph-volume raw activate successful for osd ID: 2
Nov 24 13:20:44 np0005533938 bash[90438]: --> ceph-volume raw activate successful for osd ID: 2
Nov 24 13:20:44 np0005533938 systemd[1]: libpod-9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a.scope: Deactivated successfully.
Nov 24 13:20:44 np0005533938 podman[90438]: 2025-11-24 18:20:44.566094152 +0000 UTC m=+1.316523681 container died 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:44 np0005533938 systemd[1]: libpod-9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a.scope: Consumed 1.110s CPU time.
Nov 24 13:20:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 24 13:20:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:44 np0005533938 ceph-mon[74927]: osd.0 [v2:192.168.122.100:6802/3004045453,v1:192.168.122.100:6803/3004045453] boot
Nov 24 13:20:44 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] creating mgr pool
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 24 13:20:45 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cf3978808a570ed4f80861a1985084b6e1ae4a8e02a9119bcb3fd3f356753bf3-merged.mount: Deactivated successfully.
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:45 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:45 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:45 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:45 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:45 np0005533938 podman[90438]: 2025-11-24 18:20:45.579868915 +0000 UTC m=+2.330298444 container remove 9077f4fb005d5e613a94a4facab79446737f61280eb2394ea1d5afe9fc1e924a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 24 13:20:45 np0005533938 podman[90636]: 2025-11-24 18:20:45.783353084 +0000 UTC m=+0.021266688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:46 np0005533938 podman[90636]: 2025-11-24 18:20:46.066404012 +0000 UTC m=+0.304317586 container create d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e12 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 24 13:20:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63cecfee427ea3d2bf3c04dc6c2b5a0a565c00ff072d537fb33dfb3b5565254c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 3314933000852226048, adjusting msgr requires
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 crush map has features 288514051259236352, adjusting msgr requires
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:46 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:46 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 13:20:46 np0005533938 podman[90636]: 2025-11-24 18:20:46.323402204 +0000 UTC m=+0.561315798 container init d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 24 13:20:46 np0005533938 podman[90636]: 2025-11-24 18:20:46.328870222 +0000 UTC m=+0.566783816 container start d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: pidfile_write: ignore empty --pid-file
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:46 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685e6ef800 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:46 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:46 np0005533938 ceph-osd[88544]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 13:20:46 np0005533938 ceph-osd[88544]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 24 13:20:46 np0005533938 ceph-osd[88544]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 13:20:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d8b7800 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 13:20:46 np0005533938 bash[90636]: d4b4bd73407edf8b64315195325242832b213f100b6f7c4e8a80bbdc340ec673
Nov 24 13:20:46 np0005533938 systemd[1]: Started Ceph osd.2 for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: load: jerasure load: lrc 
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:46 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:47 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:47 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:47 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:47 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.389388836 +0000 UTC m=+0.086734101 container create 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.323720078 +0000 UTC m=+0.021065353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d926c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs mount
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs mount shared_bdev_used = 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Git sha 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DB SUMMARY
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DB Session ID:  J55JOOGKCSODWZHIF7GR
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                     Options.env: 0x55685e741d50
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                Options.info_log: 0x55685d942ba0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.write_buffer_manager: 0x55685e84c460
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Compression algorithms supported:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kZSTD supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kXpressCompression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kBZip2Compression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kLZ4Compression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kZlibCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: 	kSnappyCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943200)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 17641e21-a0a3-419d-be68-bf5701bf60bf
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447465799, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447466191, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: freelist init
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: freelist _read_cfg
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs umount
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) close
Nov 24 13:20:47 np0005533938 systemd[1]: Started libpod-conmon-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope.
Nov 24 13:20:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.554959098 +0000 UTC m=+0.252304373 container init 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.577007365 +0000 UTC m=+0.274352620 container start 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:47 np0005533938 kind_hodgkin[91026]: 167 167
Nov 24 13:20:47 np0005533938 systemd[1]: libpod-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope: Deactivated successfully.
Nov 24 13:20:47 np0005533938 conmon[91026]: conmon 565bc92e0f24fe0aa540 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope/container/memory.events
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.608620133 +0000 UTC m=+0.305965408 container attach 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.610025348 +0000 UTC m=+0.307370643 container died 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bdev(0x55685d927400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs mount
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluefs mount shared_bdev_used = 4718592
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: RocksDB version: 7.9.2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Git sha 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DB SUMMARY
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DB Session ID:  J55JOOGKCSODWZHIF7GQ
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: CURRENT file:  CURRENT
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: IDENTITY file:  IDENTITY
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.error_if_exists: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.create_if_missing: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.paranoid_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                     Options.env: 0x55685e902b60
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                Options.info_log: 0x55685d942900
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_file_opening_threads: 16
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.statistics: (nil)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.use_fsync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.max_log_file_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.allow_fallocate: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.use_direct_reads: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.create_missing_column_families: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.db_log_dir: 
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                                 Options.wal_dir: db.wal
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.advise_random_on_open: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.write_buffer_manager: 0x55685e84ca00
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                            Options.rate_limiter: (nil)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.unordered_write: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.row_cache: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                              Options.wal_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.allow_ingest_behind: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.two_write_queues: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.manual_wal_flush: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.wal_compression: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.atomic_flush: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.log_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.allow_data_in_errors: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.db_host_id: __hostname__
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_background_jobs: 4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_background_compactions: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_subcompactions: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.max_open_files: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.max_background_flushes: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Compression algorithms supported:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kZSTD supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kXpressCompression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kBZip2Compression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kLZ4Compression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kZlibCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: #011kSnappyCompression supported: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55685d92add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d942d60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:           Options.merge_operator: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.compaction_filter_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.sst_partitioner_factory: None
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55685d943320)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55685d92a430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.write_buffer_size: 16777216
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.max_write_buffer_number: 64
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.compression: LZ4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.num_levels: 7
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.level: 32767
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.compression_opts.strategy: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                  Options.compression_opts.enabled: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.arena_block_size: 1048576
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.disable_auto_compactions: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.inplace_update_support: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.bloom_locality: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                    Options.max_successive_merges: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.paranoid_file_checks: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.force_consistency_checks: 1
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.report_bg_io_stats: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                               Options.ttl: 2592000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                       Options.enable_blob_files: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                           Options.min_blob_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                          Options.blob_file_size: 268435456
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb:                Options.blob_file_starting_level: 0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 17641e21-a0a3-419d-be68-bf5701bf60bf
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447724139, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447759872, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-df023e9f9c3fd2b6f028f43d30a034f34d8d777c5d220a05cb0c774d3cc96883-merged.mount: Deactivated successfully.
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447786973, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447791195, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008447, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "17641e21-a0a3-419d-be68-bf5701bf60bf", "db_session_id": "J55JOOGKCSODWZHIF7GQ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008447810109, "job": 1, "event": "recovery_finished"}
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 24 13:20:47 np0005533938 podman[90816]: 2025-11-24 18:20:47.818034672 +0000 UTC m=+0.515379927 container remove 565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 24 13:20:47 np0005533938 systemd[1]: libpod-conmon-565bc92e0f24fe0aa540a07cad32aef02936870adc6d74c9b1fcdf39bb3e389f.scope: Deactivated successfully.
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55685e932000
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: DB pointer 0x55685d965a00
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 460.80 MB usag
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: _get_class not permitted to load lua
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: _get_class not permitted to load sdk
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: _get_class not permitted to load test_remote_reads
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 load_pgs
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 load_pgs opened 0 pgs
Nov 24 13:20:47 np0005533938 ceph-osd[90655]: osd.2 0 log_to_monitors true
Nov 24 13:20:47 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:47.927+0000 7f3fbfecf740 -1 osd.2 0 log_to_monitors true
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 24 13:20:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 13:20:47 np0005533938 podman[91232]: 2025-11-24 18:20:47.971862077 +0000 UTC m=+0.048355262 container create e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:20:48 np0005533938 systemd[1]: Started libpod-conmon-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope.
Nov 24 13:20:48 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:48 np0005533938 podman[91232]: 2025-11-24 18:20:47.954424587 +0000 UTC m=+0.030917802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:48 np0005533938 podman[91232]: 2025-11-24 18:20:48.070035026 +0000 UTC m=+0.146528231 container init e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:48 np0005533938 podman[91232]: 2025-11-24 18:20:48.077790752 +0000 UTC m=+0.154283937 container start e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:48 np0005533938 podman[91232]: 2025-11-24 18:20:48.083101266 +0000 UTC m=+0.159594451 container attach e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 e15: 3 total, 1 up, 3 in
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 1 up, 3 in
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:48 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:48 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 40.387 iops: 10338.975 elapsed_sec: 0.290
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: log_channel(cluster) log [WRN] : OSD bench result of 10338.975085 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 0 waiting for initial osdmap
Nov 24 13:20:48 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:48.232+0000 7f4afe579640 -1 osd.1 0 waiting for initial osdmap
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 check_osdmap_features require_osd_release unknown -> reef
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 set_numa_affinity not setting numa affinity
Nov 24 13:20:48 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-1[89552]: 2025-11-24T18:20:48.262+0000 7f4af9ba1640 -1 osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:48 np0005533938 ceph-osd[89581]: osd.1 15 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 24 13:20:48 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1623794735; not ready for session (expect reconnect)
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:48 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 24 13:20:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 24 13:20:48 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 24 13:20:48 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]: {
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_id": 0,
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "type": "bluestore"
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    },
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_id": 1,
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "type": "bluestore"
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    },
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_id": 2,
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:        "type": "bluestore"
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]:    }
Nov 24 13:20:49 np0005533938 affectionate_rhodes[91282]: }
Nov 24 13:20:49 np0005533938 systemd[1]: libpod-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Deactivated successfully.
Nov 24 13:20:49 np0005533938 systemd[1]: libpod-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Consumed 1.069s CPU time.
Nov 24 13:20:49 np0005533938 podman[91316]: 2025-11-24 18:20:49.191218532 +0000 UTC m=+0.029427114 container died e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 24 13:20:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-23563c8a49868f5b3a9440d59a58e33a101701a8eed0befc65088c165f380fcf-merged.mount: Deactivated successfully.
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 done with init, starting boot process
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 start_boot
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 24 13:20:49 np0005533938 ceph-osd[90655]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735] boot
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:49 np0005533938 ceph-osd[89581]: osd.1 16 state: booting -> active
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:49 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[13,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:20:49 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:49 np0005533938 podman[91316]: 2025-11-24 18:20:49.252936641 +0000 UTC m=+0.091145193 container remove e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: OSD bench result of 10338.975085 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:49 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 13:20:49 np0005533938 systemd[1]: libpod-conmon-e02237377b479f620bd5b427969d53b72168db5649c9e8fa419b561cc100f3d0.scope: Deactivated successfully.
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:49 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:50 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=16/17 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[13,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: Cluster is now healthy
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: from='osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: osd.1 [v2:192.168.122.100:6806/1623794735,v1:192.168.122.100:6807/1623794735] boot
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] creating main.db for devicehealth
Nov 24 13:20:50 np0005533938 podman[91555]: 2025-11-24 18:20:50.392772388 +0000 UTC m=+0.079425247 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 13:20:50 np0005533938 podman[91555]: 2025-11-24 18:20:50.491287006 +0000 UTC m=+0.177939845 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 13:20:50 np0005533938 ceph-mgr[75218]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:51 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:51 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:51 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:52 np0005533938 ceph-mgr[75218]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2111577097; not ready for session (expect reconnect)
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:52 np0005533938 ceph-mgr[75218]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.600 iops: 8601.706 elapsed_sec: 0.349
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: log_channel(cluster) log [WRN] : OSD bench result of 8601.705863 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 0 waiting for initial osdmap
Nov 24 13:20:52 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:52.271+0000 7f3fbbe4f640 -1 osd.2 0 waiting for initial osdmap
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 check_osdmap_features require_osd_release unknown -> reef
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:52 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-osd-2[90651]: 2025-11-24T18:20:52.301+0000 7f3fb7477640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 set_numa_affinity not setting numa affinity
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dfqptp(active, since 78s)
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097] boot
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 24 13:20:52 np0005533938 ceph-osd[90655]: osd.2 19 state: booting -> active
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.43318039 +0000 UTC m=+0.048085986 container create bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:20:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 24 13:20:52 np0005533938 systemd[1]: Started libpod-conmon-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope.
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.413627216 +0000 UTC m=+0.028532822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.529755609 +0000 UTC m=+0.144661225 container init bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.539004452 +0000 UTC m=+0.153910038 container start bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.542776007 +0000 UTC m=+0.157681623 container attach bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:52 np0005533938 blissful_rubin[91977]: 167 167
Nov 24 13:20:52 np0005533938 systemd[1]: libpod-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope: Deactivated successfully.
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.54524145 +0000 UTC m=+0.160147046 container died bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:20:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-424f0cfb8047e88bd237df49aad6fbdc18081e6fa9769c57703a93f45def1a86-merged.mount: Deactivated successfully.
Nov 24 13:20:52 np0005533938 podman[91960]: 2025-11-24 18:20:52.577841583 +0000 UTC m=+0.192747159 container remove bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:52 np0005533938 systemd[1]: libpod-conmon-bea56f8f48679a31074a5f47ea411add2b07d505a119bb4dfd6ff353c1c50739.scope: Deactivated successfully.
Nov 24 13:20:52 np0005533938 podman[91999]: 2025-11-24 18:20:52.774873959 +0000 UTC m=+0.078296148 container create 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:20:52 np0005533938 systemd[1]: Started libpod-conmon-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope.
Nov 24 13:20:52 np0005533938 podman[91999]: 2025-11-24 18:20:52.748628416 +0000 UTC m=+0.052050645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:52 np0005533938 podman[91999]: 2025-11-24 18:20:52.86162533 +0000 UTC m=+0.165047499 container init 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:20:52 np0005533938 podman[91999]: 2025-11-24 18:20:52.867917339 +0000 UTC m=+0.171339508 container start 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:20:52 np0005533938 podman[91999]: 2025-11-24 18:20:52.871622853 +0000 UTC m=+0.175045022 container attach 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 24 13:20:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 24 13:20:53 np0005533938 ceph-mon[74927]: OSD bench result of 8601.705863 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 24 13:20:53 np0005533938 ceph-mon[74927]: osd.2 [v2:192.168.122.100:6810/2111577097,v1:192.168.122.100:6811/2111577097] boot
Nov 24 13:20:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:20:54 np0005533938 charming_williams[92015]: [
Nov 24 13:20:54 np0005533938 charming_williams[92015]:    {
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "available": false,
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "ceph_device": false,
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "lsm_data": {},
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "lvs": [],
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "path": "/dev/sr0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "rejected_reasons": [
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "Has a FileSystem",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "Insufficient space (<5GB)"
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        ],
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        "sys_api": {
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "actuators": null,
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "device_nodes": "sr0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "devname": "sr0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "human_readable_size": "482.00 KB",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "id_bus": "ata",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "model": "QEMU DVD-ROM",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "nr_requests": "2",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "parent": "/dev/sr0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "partitions": {},
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "path": "/dev/sr0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "removable": "1",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "rev": "2.5+",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "ro": "0",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "rotational": "1",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "sas_address": "",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "sas_device_handle": "",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "scheduler_mode": "mq-deadline",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "sectors": 0,
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "sectorsize": "2048",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "size": 493568.0,
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "support_discard": "2048",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "type": "disk",
Nov 24 13:20:54 np0005533938 charming_williams[92015]:            "vendor": "QEMU"
Nov 24 13:20:54 np0005533938 charming_williams[92015]:        }
Nov 24 13:20:54 np0005533938 charming_williams[92015]:    }
Nov 24 13:20:54 np0005533938 charming_williams[92015]: ]
Nov 24 13:20:54 np0005533938 systemd[1]: libpod-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Deactivated successfully.
Nov 24 13:20:54 np0005533938 systemd[1]: libpod-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Consumed 1.699s CPU time.
Nov 24 13:20:54 np0005533938 podman[93983]: 2025-11-24 18:20:54.555771967 +0000 UTC m=+0.034049881 container died 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0e4568dfb82019716305bd8de0361a8cbb222f3bed44521e7625995797c52810-merged.mount: Deactivated successfully.
Nov 24 13:20:54 np0005533938 podman[93983]: 2025-11-24 18:20:54.691068965 +0000 UTC m=+0.169346799 container remove 271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:54 np0005533938 systemd[1]: libpod-conmon-271188e4b2b97ecc07337523b2c0faed27d75f243567097df89c4ff3e7be49f4.scope: Deactivated successfully.
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 745cb863-6dfb-4127-9f4c-325ddf584928 does not exist
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 12fab4d6-a8f4-45dd-ac72-fbdc0fa88dcb does not exist
Nov 24 13:20:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 280ab2e1-980d-4137-b556-bb21a89a5da2 does not exist
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:20:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.415204793 +0000 UTC m=+0.069878786 container create ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:20:55 np0005533938 systemd[1]: Started libpod-conmon-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope.
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.375123901 +0000 UTC m=+0.029797944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.4926901 +0000 UTC m=+0.147364153 container init ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.502653622 +0000 UTC m=+0.157327625 container start ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.506749165 +0000 UTC m=+0.161423168 container attach ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:55 np0005533938 stoic_ritchie[94155]: 167 167
Nov 24 13:20:55 np0005533938 systemd[1]: libpod-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope: Deactivated successfully.
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.511496915 +0000 UTC m=+0.166170938 container died ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5733c1a78f6abf21470e3c5fdf022e65b528b076a1c8a12ceed1854114ac3b73-merged.mount: Deactivated successfully.
Nov 24 13:20:55 np0005533938 podman[94138]: 2025-11-24 18:20:55.555696231 +0000 UTC m=+0.210370254 container remove ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:55 np0005533938 systemd[1]: libpod-conmon-ca049e8ff7f98f2df01ca95bbdd308791de852534b6b69d142373ba66cda9ad8.scope: Deactivated successfully.
Nov 24 13:20:55 np0005533938 podman[94178]: 2025-11-24 18:20:55.759282673 +0000 UTC m=+0.055702188 container create f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:20:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:20:55 np0005533938 podman[94178]: 2025-11-24 18:20:55.727987643 +0000 UTC m=+0.024407168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:55 np0005533938 systemd[1]: Started libpod-conmon-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope.
Nov 24 13:20:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:55 np0005533938 podman[94178]: 2025-11-24 18:20:55.878403992 +0000 UTC m=+0.174823577 container init f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:20:55 np0005533938 podman[94178]: 2025-11-24 18:20:55.889771439 +0000 UTC m=+0.186190924 container start f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:20:55 np0005533938 podman[94178]: 2025-11-24 18:20:55.893328079 +0000 UTC m=+0.189747604 container attach f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:20:56 np0005533938 vibrant_hawking[94194]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:20:56 np0005533938 vibrant_hawking[94194]: --> relative data size: 1.0
Nov 24 13:20:56 np0005533938 vibrant_hawking[94194]: --> All data devices are unavailable
Nov 24 13:20:56 np0005533938 systemd[1]: libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Deactivated successfully.
Nov 24 13:20:56 np0005533938 systemd[1]: libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Consumed 1.049s CPU time.
Nov 24 13:20:56 np0005533938 conmon[94194]: conmon f6d19fa10d4f79988211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope/container/memory.events
Nov 24 13:20:56 np0005533938 podman[94178]: 2025-11-24 18:20:56.986817245 +0000 UTC m=+1.283236730 container died f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:20:57 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b4e3b05c6f60e6c5bae49a88ae7940b6744733b29cb273d9db65451fdc8c27ad-merged.mount: Deactivated successfully.
Nov 24 13:20:57 np0005533938 podman[94178]: 2025-11-24 18:20:57.049487887 +0000 UTC m=+1.345907372 container remove f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:20:57 np0005533938 systemd[1]: libpod-conmon-f6d19fa10d4f7998821132a5a6c464fb32064df004e4001ac56efbe7c60f1b21.scope: Deactivated successfully.
Nov 24 13:20:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.716084383 +0000 UTC m=+0.043597722 container create 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:20:57 np0005533938 systemd[1]: Started libpod-conmon-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope.
Nov 24 13:20:57 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.698482718 +0000 UTC m=+0.025996057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.794241817 +0000 UTC m=+0.121755156 container init 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.800222438 +0000 UTC m=+0.127735767 container start 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:20:57 np0005533938 great_hawking[94390]: 167 167
Nov 24 13:20:57 np0005533938 systemd[1]: libpod-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope: Deactivated successfully.
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.808376944 +0000 UTC m=+0.135890353 container attach 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.80940463 +0000 UTC m=+0.136917989 container died 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:57 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cc9486e0d7c77f0fa427488f88510db8536c59787d286d7944b16421b0f654c6-merged.mount: Deactivated successfully.
Nov 24 13:20:57 np0005533938 podman[94374]: 2025-11-24 18:20:57.846959328 +0000 UTC m=+0.174472657 container remove 2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:20:57 np0005533938 systemd[1]: libpod-conmon-2ffbb208c305cdb6c09feb7ce01d51322dc75fd1a4ded0d32cf765f25684b67a.scope: Deactivated successfully.
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.02519249 +0000 UTC m=+0.046829104 container create 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:58 np0005533938 systemd[1]: Started libpod-conmon-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope.
Nov 24 13:20:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:20:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.007715078 +0000 UTC m=+0.029351712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.106101043 +0000 UTC m=+0.127737657 container init 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.113773537 +0000 UTC m=+0.135410151 container start 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.117183913 +0000 UTC m=+0.138820537 container attach 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:20:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]: {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    "0": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "devices": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "/dev/loop3"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            ],
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_name": "ceph_lv0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_size": "21470642176",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "name": "ceph_lv0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "tags": {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.crush_device_class": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.encrypted": "0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_id": "0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.vdo": "0"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            },
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "vg_name": "ceph_vg0"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        }
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    ],
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    "1": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "devices": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "/dev/loop4"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            ],
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_name": "ceph_lv1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_size": "21470642176",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "name": "ceph_lv1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "tags": {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.crush_device_class": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.encrypted": "0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_id": "1",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.vdo": "0"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            },
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "vg_name": "ceph_vg1"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        }
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    ],
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    "2": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "devices": [
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "/dev/loop5"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            ],
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_name": "ceph_lv2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_size": "21470642176",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "name": "ceph_lv2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "tags": {
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.cluster_name": "ceph",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.crush_device_class": "",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.encrypted": "0",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osd_id": "2",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:                "ceph.vdo": "0"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            },
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "type": "block",
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:            "vg_name": "ceph_vg2"
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:        }
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]:    ]
Nov 24 13:20:58 np0005533938 priceless_mendel[94430]: }
Nov 24 13:20:58 np0005533938 systemd[1]: libpod-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope: Deactivated successfully.
Nov 24 13:20:58 np0005533938 podman[94414]: 2025-11-24 18:20:58.906740674 +0000 UTC m=+0.928377308 container died 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:20:59 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fd3161d93e5ce3ec26c719ae502444e19f4f486c690bb75783a8466ad467457f-merged.mount: Deactivated successfully.
Nov 24 13:20:59 np0005533938 podman[94414]: 2025-11-24 18:20:59.491479872 +0000 UTC m=+1.513116486 container remove 54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:20:59 np0005533938 systemd[1]: libpod-conmon-54cd14fe99619bb13cd56ff0874dac6b123002c3ea546f25652f580954bcf27c.scope: Deactivated successfully.
Nov 24 13:21:00 np0005533938 podman[94594]: 2025-11-24 18:21:00.134676666 +0000 UTC m=+0.045899541 container create b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:00 np0005533938 systemd[1]: Started libpod-conmon-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope.
Nov 24 13:21:00 np0005533938 podman[94594]: 2025-11-24 18:21:00.118486187 +0000 UTC m=+0.029709092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:00 np0005533938 podman[94594]: 2025-11-24 18:21:00.230754182 +0000 UTC m=+0.141977067 container init b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:00 np0005533938 podman[94594]: 2025-11-24 18:21:00.238293032 +0000 UTC m=+0.149515907 container start b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:21:00 np0005533938 podman[94594]: 2025-11-24 18:21:00.241501003 +0000 UTC m=+0.152723898 container attach b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:21:00 np0005533938 charming_shaw[94610]: 167 167
Nov 24 13:21:00 np0005533938 systemd[1]: libpod-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope: Deactivated successfully.
Nov 24 13:21:00 np0005533938 podman[94615]: 2025-11-24 18:21:00.288310766 +0000 UTC m=+0.027970958 container died b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:21:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-903974524d167e48878049e1ed17f7be1d1d0f52e856d21c7b70676e9c325467-merged.mount: Deactivated successfully.
Nov 24 13:21:00 np0005533938 podman[94615]: 2025-11-24 18:21:00.323931585 +0000 UTC m=+0.063591727 container remove b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:00 np0005533938 systemd[1]: libpod-conmon-b394f2aced11e9764448e12505d5f542ce07daff3a48933cf57f09172d25da1f.scope: Deactivated successfully.
Nov 24 13:21:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:00 np0005533938 podman[94637]: 2025-11-24 18:21:00.54107714 +0000 UTC m=+0.062151641 container create fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:00 np0005533938 systemd[1]: Started libpod-conmon-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope.
Nov 24 13:21:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:00 np0005533938 podman[94637]: 2025-11-24 18:21:00.520319245 +0000 UTC m=+0.041393756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:00 np0005533938 podman[94637]: 2025-11-24 18:21:00.62503093 +0000 UTC m=+0.146105481 container init fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:21:00 np0005533938 podman[94637]: 2025-11-24 18:21:00.632893598 +0000 UTC m=+0.153968089 container start fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:21:00 np0005533938 podman[94637]: 2025-11-24 18:21:00.637231398 +0000 UTC m=+0.158305929 container attach fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]: {
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_id": 0,
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "type": "bluestore"
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    },
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_id": 1,
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "type": "bluestore"
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    },
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_id": 2,
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:        "type": "bluestore"
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]:    }
Nov 24 13:21:01 np0005533938 jovial_swartz[94654]: }
Nov 24 13:21:01 np0005533938 systemd[1]: libpod-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Deactivated successfully.
Nov 24 13:21:01 np0005533938 systemd[1]: libpod-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Consumed 1.136s CPU time.
Nov 24 13:21:01 np0005533938 podman[94637]: 2025-11-24 18:21:01.759209795 +0000 UTC m=+1.280284296 container died fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0cfcebcc329f38eb84e03f6b71987cbd8d453821bd5d78d1084e6d1800488d67-merged.mount: Deactivated successfully.
Nov 24 13:21:01 np0005533938 podman[94637]: 2025-11-24 18:21:01.827607212 +0000 UTC m=+1.348681713 container remove fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swartz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:21:01 np0005533938 systemd[1]: libpod-conmon-fbc3cec5810c12314ca0dcd8454f5e3ea3927282807604aa1ddf9b0f449ef64c.scope: Deactivated successfully.
Nov 24 13:21:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:02 np0005533938 podman[94920]: 2025-11-24 18:21:02.919770945 +0000 UTC m=+0.087567102 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:21:03 np0005533938 podman[94920]: 2025-11-24 18:21:03.004339571 +0000 UTC m=+0.172135638 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev e968221b-d0c6-464e-a60e-fa25faabb32e does not exist
Nov 24 13:21:03 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 24bfc914-75bd-4071-8406-6ef32f80062e does not exist
Nov 24 13:21:03 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev b7f294a3-2f49-4626-aaf9-c3fc39d81ade does not exist
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.029550502 +0000 UTC m=+0.051821880 container create 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:21:04 np0005533938 systemd[1]: Started libpod-conmon-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope.
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.006041358 +0000 UTC m=+0.028312806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.122799517 +0000 UTC m=+0.145070985 container init 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.131023965 +0000 UTC m=+0.153295333 container start 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.134054081 +0000 UTC m=+0.156325489 container attach 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:04 np0005533938 tender_bardeen[95198]: 167 167
Nov 24 13:21:04 np0005533938 systemd[1]: libpod-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope: Deactivated successfully.
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.13676537 +0000 UTC m=+0.159036738 container died 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f968aadb21685555ed9371a1030227bec24a3b259fc9b1722361f339d74bb873-merged.mount: Deactivated successfully.
Nov 24 13:21:04 np0005533938 podman[95181]: 2025-11-24 18:21:04.180163666 +0000 UTC m=+0.202435044 container remove 6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:21:04 np0005533938 systemd[1]: libpod-conmon-6c7d36c9855fd30dc1ffe869388e55e81a8aa3ef8a1bd202a73fe9565700527f.scope: Deactivated successfully.
Nov 24 13:21:04 np0005533938 podman[95222]: 2025-11-24 18:21:04.346424945 +0000 UTC m=+0.054853526 container create b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:21:04 np0005533938 systemd[1]: Started libpod-conmon-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope.
Nov 24 13:21:04 np0005533938 podman[95222]: 2025-11-24 18:21:04.326674586 +0000 UTC m=+0.035103417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:04 np0005533938 podman[95222]: 2025-11-24 18:21:04.462800074 +0000 UTC m=+0.171228725 container init b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:04 np0005533938 podman[95222]: 2025-11-24 18:21:04.47334051 +0000 UTC m=+0.181769061 container start b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:04 np0005533938 podman[95222]: 2025-11-24 18:21:04.476748846 +0000 UTC m=+0.185177407 container attach b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:05 np0005533938 nostalgic_cray[95239]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:21:05 np0005533938 nostalgic_cray[95239]: --> relative data size: 1.0
Nov 24 13:21:05 np0005533938 nostalgic_cray[95239]: --> All data devices are unavailable
Nov 24 13:21:05 np0005533938 systemd[1]: libpod-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope: Deactivated successfully.
Nov 24 13:21:05 np0005533938 podman[95222]: 2025-11-24 18:21:05.41252428 +0000 UTC m=+1.120952841 container died b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:21:05 np0005533938 systemd[1]: var-lib-containers-storage-overlay-010bf65b13ba035b054e7c05ee3ddeb6f840e5222014277e250b308c7dfa4fd5-merged.mount: Deactivated successfully.
Nov 24 13:21:05 np0005533938 podman[95222]: 2025-11-24 18:21:05.463089487 +0000 UTC m=+1.171518048 container remove b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:21:05 np0005533938 systemd[1]: libpod-conmon-b2f510f59292c25bfcd886ba84b01e155d1cb27b400cc9d1561205fae83b6f9d.scope: Deactivated successfully.
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.024826054 +0000 UTC m=+0.035359724 container create 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:21:06 np0005533938 systemd[1]: Started libpod-conmon-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope.
Nov 24 13:21:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.097306235 +0000 UTC m=+0.107839915 container init 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.104082076 +0000 UTC m=+0.114615736 container start 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.009982729 +0000 UTC m=+0.020516389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.107913683 +0000 UTC m=+0.118447363 container attach 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:06 np0005533938 hopeful_mclaren[95434]: 167 167
Nov 24 13:21:06 np0005533938 systemd[1]: libpod-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope: Deactivated successfully.
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.110471237 +0000 UTC m=+0.121004897 container died 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:21:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-16c4d55702ed76b2725fe157231d9dac0b6a2f1a170bda61a5552a35e8919c33-merged.mount: Deactivated successfully.
Nov 24 13:21:06 np0005533938 podman[95418]: 2025-11-24 18:21:06.158010458 +0000 UTC m=+0.168544128 container remove 95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mclaren, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:21:06 np0005533938 systemd[1]: libpod-conmon-95afbd2312215a22b626ea8a1b89c9b12a81fd9fba4b01697a8317ef51149d77.scope: Deactivated successfully.
Nov 24 13:21:06 np0005533938 podman[95457]: 2025-11-24 18:21:06.358573793 +0000 UTC m=+0.067954887 container create b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 24 13:21:06 np0005533938 systemd[1]: Started libpod-conmon-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope.
Nov 24 13:21:06 np0005533938 podman[95457]: 2025-11-24 18:21:06.332646848 +0000 UTC m=+0.042028032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:06 np0005533938 podman[95457]: 2025-11-24 18:21:06.446871733 +0000 UTC m=+0.156252867 container init b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:06 np0005533938 podman[95457]: 2025-11-24 18:21:06.458049836 +0000 UTC m=+0.167430980 container start b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:21:06 np0005533938 podman[95457]: 2025-11-24 18:21:06.461734899 +0000 UTC m=+0.171116043 container attach b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:21:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]: {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    "0": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "devices": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "/dev/loop3"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            ],
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_name": "ceph_lv0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_size": "21470642176",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "name": "ceph_lv0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "tags": {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.crush_device_class": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.encrypted": "0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_id": "0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.vdo": "0"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            },
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "vg_name": "ceph_vg0"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        }
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    ],
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    "1": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "devices": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "/dev/loop4"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            ],
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_name": "ceph_lv1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_size": "21470642176",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "name": "ceph_lv1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "tags": {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.crush_device_class": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.encrypted": "0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_id": "1",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.vdo": "0"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            },
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "vg_name": "ceph_vg1"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        }
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    ],
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    "2": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "devices": [
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "/dev/loop5"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            ],
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_name": "ceph_lv2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_size": "21470642176",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "name": "ceph_lv2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "tags": {
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.crush_device_class": "",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.encrypted": "0",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osd_id": "2",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:                "ceph.vdo": "0"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            },
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "type": "block",
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:            "vg_name": "ceph_vg2"
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:        }
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]:    ]
Nov 24 13:21:07 np0005533938 inspiring_hellman[95473]: }
Nov 24 13:21:07 np0005533938 systemd[1]: libpod-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope: Deactivated successfully.
Nov 24 13:21:07 np0005533938 podman[95457]: 2025-11-24 18:21:07.193041787 +0000 UTC m=+0.902422931 container died b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8e0ef08dbded87291a0eda8cda47789cd5c51eb265042e97f8dfbbcc1f28c18c-merged.mount: Deactivated successfully.
Nov 24 13:21:07 np0005533938 podman[95457]: 2025-11-24 18:21:07.262306527 +0000 UTC m=+0.971687631 container remove b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:07 np0005533938 systemd[1]: libpod-conmon-b55b6eefe8624fa420cd44f75cafc70535e6f7688df67472a5989558de552ff2.scope: Deactivated successfully.
Nov 24 13:21:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:07 np0005533938 podman[95635]: 2025-11-24 18:21:07.935173481 +0000 UTC m=+0.043114250 container create 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:21:07 np0005533938 systemd[1]: Started libpod-conmon-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope.
Nov 24 13:21:08 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:07.918029928 +0000 UTC m=+0.025970727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:08.020185108 +0000 UTC m=+0.128125887 container init 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:08.031158045 +0000 UTC m=+0.139098814 container start 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:21:08 np0005533938 flamboyant_easley[95652]: 167 167
Nov 24 13:21:08 np0005533938 systemd[1]: libpod-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope: Deactivated successfully.
Nov 24 13:21:08 np0005533938 conmon[95652]: conmon 58d3f36c93b15aeed950 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope/container/memory.events
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:08.03610303 +0000 UTC m=+0.144043839 container attach 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:08.036645093 +0000 UTC m=+0.144585862 container died 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:21:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6c12203e357fc6c0ed7a7d6f2c7407f0e5f195b61493b9ce3108638177040464-merged.mount: Deactivated successfully.
Nov 24 13:21:08 np0005533938 podman[95635]: 2025-11-24 18:21:08.067594555 +0000 UTC m=+0.175535324 container remove 58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:08 np0005533938 systemd[1]: libpod-conmon-58d3f36c93b15aeed9502048d7e18af086de7c041c4f8809a45ee55b23330370.scope: Deactivated successfully.
Nov 24 13:21:08 np0005533938 podman[95675]: 2025-11-24 18:21:08.265515374 +0000 UTC m=+0.048051585 container create 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:08 np0005533938 systemd[1]: Started libpod-conmon-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope.
Nov 24 13:21:08 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:08 np0005533938 podman[95675]: 2025-11-24 18:21:08.245478678 +0000 UTC m=+0.028014889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:08 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:08 np0005533938 podman[95675]: 2025-11-24 18:21:08.363645742 +0000 UTC m=+0.146181933 container init 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:21:08 np0005533938 podman[95675]: 2025-11-24 18:21:08.369669404 +0000 UTC m=+0.152205565 container start 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:08 np0005533938 podman[95675]: 2025-11-24 18:21:08.373140132 +0000 UTC m=+0.155676303 container attach 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:21:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]: {
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_id": 0,
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "type": "bluestore"
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    },
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_id": 1,
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "type": "bluestore"
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    },
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_id": 2,
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:        "type": "bluestore"
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]:    }
Nov 24 13:21:09 np0005533938 goofy_bassi[95692]: }
Nov 24 13:21:09 np0005533938 systemd[1]: libpod-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Deactivated successfully.
Nov 24 13:21:09 np0005533938 systemd[1]: libpod-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Consumed 1.086s CPU time.
Nov 24 13:21:09 np0005533938 podman[95675]: 2025-11-24 18:21:09.450483991 +0000 UTC m=+1.233020202 container died 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:09 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2c69a0a748856eda226f3f2d162bbf77cbe11588456e04c5665f108851eaec41-merged.mount: Deactivated successfully.
Nov 24 13:21:09 np0005533938 podman[95675]: 2025-11-24 18:21:09.500555986 +0000 UTC m=+1.283092147 container remove 9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 13:21:09 np0005533938 systemd[1]: libpod-conmon-9db0e3c50c57a648af76b69fcbfee206a1fda059688499fa63f6233148f5c32d.scope: Deactivated successfully.
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:10 np0005533938 python3[95811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:10 np0005533938 podman[95813]: 2025-11-24 18:21:10.756287579 +0000 UTC m=+0.037680733 container create 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:10 np0005533938 systemd[1]: Started libpod-conmon-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope.
Nov 24 13:21:10 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:10 np0005533938 podman[95813]: 2025-11-24 18:21:10.832204567 +0000 UTC m=+0.113597731 container init 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:10 np0005533938 podman[95813]: 2025-11-24 18:21:10.739817733 +0000 UTC m=+0.021210917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:10 np0005533938 podman[95813]: 2025-11-24 18:21:10.839120651 +0000 UTC m=+0.120513805 container start 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:10 np0005533938 podman[95813]: 2025-11-24 18:21:10.841710317 +0000 UTC m=+0.123103531 container attach 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 13:21:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469430221' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 13:21:11 np0005533938 thirsty_nightingale[95829]: 
Nov 24 13:21:11 np0005533938 thirsty_nightingale[95829]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":144,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":20,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":83410944,"bytes_avail":64328515584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 13:21:11 np0005533938 systemd[1]: libpod-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope: Deactivated successfully.
Nov 24 13:21:11 np0005533938 podman[95813]: 2025-11-24 18:21:11.46783535 +0000 UTC m=+0.749228554 container died 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:21:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-eba66205735afddb23c8f355b0db5e251770ded4b29a3dfe87d1754783343b5f-merged.mount: Deactivated successfully.
Nov 24 13:21:11 np0005533938 podman[95813]: 2025-11-24 18:21:11.514924059 +0000 UTC m=+0.796317203 container remove 29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80 (image=quay.io/ceph/ceph:v18, name=thirsty_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:21:11 np0005533938 systemd[1]: libpod-conmon-29d351739942155f582fcfc897c0f027069c083447060bc9c41d6ceff5c7ec80.scope: Deactivated successfully.
Nov 24 13:21:11 np0005533938 python3[95893]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:12 np0005533938 podman[95894]: 2025-11-24 18:21:12.060450767 +0000 UTC m=+0.044169097 container create eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:12 np0005533938 systemd[1]: Started libpod-conmon-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope.
Nov 24 13:21:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:12 np0005533938 podman[95894]: 2025-11-24 18:21:12.123346875 +0000 UTC m=+0.107065195 container init eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:21:12 np0005533938 podman[95894]: 2025-11-24 18:21:12.130933077 +0000 UTC m=+0.114651397 container start eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:21:12 np0005533938 podman[95894]: 2025-11-24 18:21:12.133977964 +0000 UTC m=+0.117696284 container attach eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:12 np0005533938 podman[95894]: 2025-11-24 18:21:12.040574755 +0000 UTC m=+0.024293145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 24 13:21:12 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 24 13:21:13 np0005533938 amazing_brown[95909]: pool 'vms' created
Nov 24 13:21:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 24 13:21:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:13 np0005533938 systemd[1]: libpod-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope: Deactivated successfully.
Nov 24 13:21:13 np0005533938 podman[95894]: 2025-11-24 18:21:13.030453265 +0000 UTC m=+1.014171605 container died eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-deb3cd6c062618adcafbc1f242b7c58e38107d94846553368b8fa24b2a904e96-merged.mount: Deactivated successfully.
Nov 24 13:21:13 np0005533938 podman[95894]: 2025-11-24 18:21:13.07497296 +0000 UTC m=+1.058691280 container remove eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476 (image=quay.io/ceph/ceph:v18, name=amazing_brown, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:21:13 np0005533938 systemd[1]: libpod-conmon-eeba2335ba5a93b344444be81733a7ffa4e0f1b3d5a451a27aedd5f8d2d7d476.scope: Deactivated successfully.
Nov 24 13:21:13 np0005533938 python3[95973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:13 np0005533938 podman[95974]: 2025-11-24 18:21:13.430349495 +0000 UTC m=+0.036085582 container create 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:13 np0005533938 systemd[1]: Started libpod-conmon-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope.
Nov 24 13:21:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:13 np0005533938 podman[95974]: 2025-11-24 18:21:13.496751982 +0000 UTC m=+0.102488069 container init 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 24 13:21:13 np0005533938 podman[95974]: 2025-11-24 18:21:13.50183202 +0000 UTC m=+0.107568117 container start 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:13 np0005533938 podman[95974]: 2025-11-24 18:21:13.505085713 +0000 UTC m=+0.110821820 container attach 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:21:13 np0005533938 podman[95974]: 2025-11-24 18:21:13.414724981 +0000 UTC m=+0.020461098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:14 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1996595322' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 24 13:21:15 np0005533938 jolly_meninsky[95990]: pool 'volumes' created
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:15 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3178470247' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:15 np0005533938 systemd[1]: libpod-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope: Deactivated successfully.
Nov 24 13:21:15 np0005533938 conmon[95990]: conmon 897d79ec286f196995ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope/container/memory.events
Nov 24 13:21:15 np0005533938 podman[95974]: 2025-11-24 18:21:15.035008451 +0000 UTC m=+1.640744598 container died 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:21:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c1dd6dfd42e1ac56a670eb6b1c45a42aa28429953edd5f9faa82590023960c37-merged.mount: Deactivated successfully.
Nov 24 13:21:15 np0005533938 podman[95974]: 2025-11-24 18:21:15.078718105 +0000 UTC m=+1.684454192 container remove 897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc (image=quay.io/ceph/ceph:v18, name=jolly_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:21:15 np0005533938 systemd[1]: libpod-conmon-897d79ec286f196995abba40f40692c83d6011803e01b10edb67b2935455c5bc.scope: Deactivated successfully.
Nov 24 13:21:15 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:15 np0005533938 python3[96052]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:15 np0005533938 podman[96053]: 2025-11-24 18:21:15.433368542 +0000 UTC m=+0.058256103 container create 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:15 np0005533938 systemd[1]: Started libpod-conmon-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope.
Nov 24 13:21:15 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:15 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:15 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:15 np0005533938 podman[96053]: 2025-11-24 18:21:15.415651754 +0000 UTC m=+0.040539285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:15 np0005533938 podman[96053]: 2025-11-24 18:21:15.514655075 +0000 UTC m=+0.139542636 container init 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:21:15 np0005533938 podman[96053]: 2025-11-24 18:21:15.524568855 +0000 UTC m=+0.149456416 container start 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:15 np0005533938 podman[96053]: 2025-11-24 18:21:15.528083514 +0000 UTC m=+0.152971065 container attach 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 24 13:21:16 np0005533938 mystifying_chaplygin[96068]: pool 'backups' created
Nov 24 13:21:16 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 24 13:21:16 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:16 np0005533938 systemd[1]: libpod-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope: Deactivated successfully.
Nov 24 13:21:16 np0005533938 podman[96053]: 2025-11-24 18:21:16.074640298 +0000 UTC m=+0.699527819 container died 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:21:16 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5bed6ff73d5d86668ebbe880456fb93ded9b6f4e6279f0bbcfe64542ceba51a9-merged.mount: Deactivated successfully.
Nov 24 13:21:16 np0005533938 podman[96053]: 2025-11-24 18:21:16.113009187 +0000 UTC m=+0.737896708 container remove 5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f (image=quay.io/ceph/ceph:v18, name=mystifying_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:16 np0005533938 systemd[1]: libpod-conmon-5404e0ad74e3ab048bdbbf76ace392498b5f4e2aadea7fe1185b4fdb5a63362f.scope: Deactivated successfully.
Nov 24 13:21:16 np0005533938 python3[96135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:16 np0005533938 podman[96136]: 2025-11-24 18:21:16.427624272 +0000 UTC m=+0.044225287 container create 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:16 np0005533938 systemd[1]: Started libpod-conmon-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope.
Nov 24 13:21:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 2 unknown, 1 active+clean, 1 creating+peering; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:16 np0005533938 podman[96136]: 2025-11-24 18:21:16.409753141 +0000 UTC m=+0.026354186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:16 np0005533938 podman[96136]: 2025-11-24 18:21:16.50792227 +0000 UTC m=+0.124523315 container init 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:16 np0005533938 podman[96136]: 2025-11-24 18:21:16.517984395 +0000 UTC m=+0.134585410 container start 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:21:16 np0005533938 podman[96136]: 2025-11-24 18:21:16.522215491 +0000 UTC m=+0.138816536 container attach 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:21:16 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2031348729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 24 13:21:17 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:18 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 24 13:21:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 24 13:21:18 np0005533938 tender_sammet[96151]: pool 'images' created
Nov 24 13:21:18 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 24 13:21:18 np0005533938 systemd[1]: libpod-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope: Deactivated successfully.
Nov 24 13:21:18 np0005533938 podman[96136]: 2025-11-24 18:21:18.092211513 +0000 UTC m=+1.708812518 container died 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:21:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b1ff59deefb2837c82b0083011b58cf74d13892df35d706ef5aac76dcb7d5c69-merged.mount: Deactivated successfully.
Nov 24 13:21:18 np0005533938 podman[96136]: 2025-11-24 18:21:18.134628514 +0000 UTC m=+1.751229529 container remove 4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94 (image=quay.io/ceph/ceph:v18, name=tender_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:21:18 np0005533938 systemd[1]: libpod-conmon-4d2cc798afd5c6f2e7e684db0057b46717616eed2d632d7e77106b326f2fdc94.scope: Deactivated successfully.
Nov 24 13:21:18 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:18 np0005533938 python3[96216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:18 np0005533938 podman[96217]: 2025-11-24 18:21:18.469267735 +0000 UTC m=+0.039945829 container create 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:18 np0005533938 systemd[1]: Started libpod-conmon-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope.
Nov 24 13:21:18 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:18 np0005533938 podman[96217]: 2025-11-24 18:21:18.536592936 +0000 UTC m=+0.107271040 container init 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:18 np0005533938 podman[96217]: 2025-11-24 18:21:18.541785037 +0000 UTC m=+0.112463131 container start 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:18 np0005533938 podman[96217]: 2025-11-24 18:21:18.545119031 +0000 UTC m=+0.115797165 container attach 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:18 np0005533938 podman[96217]: 2025-11-24 18:21:18.454618465 +0000 UTC m=+0.025296579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2454476534' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 24 13:21:19 np0005533938 heuristic_ellis[96232]: pool 'cephfs.cephfs.meta' created
Nov 24 13:21:19 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 24 13:21:19 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:19 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 27 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:19 np0005533938 systemd[1]: libpod-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope: Deactivated successfully.
Nov 24 13:21:19 np0005533938 podman[96217]: 2025-11-24 18:21:19.101404631 +0000 UTC m=+0.672082725 container died 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:21:19 np0005533938 systemd[1]: var-lib-containers-storage-overlay-48c848a5f7431f4241c98af994dcc950b88cce41eb764709556be9e39bf5c174-merged.mount: Deactivated successfully.
Nov 24 13:21:19 np0005533938 podman[96217]: 2025-11-24 18:21:19.14929533 +0000 UTC m=+0.719973424 container remove 5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f (image=quay.io/ceph/ceph:v18, name=heuristic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:21:19 np0005533938 systemd[1]: libpod-conmon-5dd8776412cfd3aedfa6662a2712596eb94b0da10e74d903118f587ee63feb5f.scope: Deactivated successfully.
Nov 24 13:21:19 np0005533938 python3[96295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:19 np0005533938 podman[96296]: 2025-11-24 18:21:19.431740534 +0000 UTC m=+0.037274933 container create 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:19 np0005533938 systemd[1]: Started libpod-conmon-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope.
Nov 24 13:21:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:19 np0005533938 podman[96296]: 2025-11-24 18:21:19.492993221 +0000 UTC m=+0.098527630 container init 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:19 np0005533938 podman[96296]: 2025-11-24 18:21:19.498466859 +0000 UTC m=+0.104001278 container start 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:19 np0005533938 podman[96296]: 2025-11-24 18:21:19.501437564 +0000 UTC m=+0.106971973 container attach 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:19 np0005533938 podman[96296]: 2025-11-24 18:21:19.41378102 +0000 UTC m=+0.019315449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2751501735' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 24 13:21:20 np0005533938 epic_chaum[96312]: pool 'cephfs.cephfs.data' created
Nov 24 13:21:20 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 24 13:21:20 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 28 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [0] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:20 np0005533938 systemd[1]: libpod-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope: Deactivated successfully.
Nov 24 13:21:20 np0005533938 podman[96296]: 2025-11-24 18:21:20.118286323 +0000 UTC m=+0.723820722 container died 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:21:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-62e44cc07a6c727f781e46413f86c3c22427b73494ab4d5c46ec4951356fd8c9-merged.mount: Deactivated successfully.
Nov 24 13:21:20 np0005533938 podman[96296]: 2025-11-24 18:21:20.15223112 +0000 UTC m=+0.757765519 container remove 3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301 (image=quay.io/ceph/ceph:v18, name=epic_chaum, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:21:20 np0005533938 systemd[1]: libpod-conmon-3ff12a3f1a0ae8ce3d2972993aae34a15d10cbf8e9e212c984248253fdba7301.scope: Deactivated successfully.
Nov 24 13:21:20 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:20 np0005533938 python3[96376]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:20 np0005533938 podman[96377]: 2025-11-24 18:21:20.546750874 +0000 UTC m=+0.049065800 container create 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:20 np0005533938 systemd[1]: Started libpod-conmon-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope.
Nov 24 13:21:20 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:20 np0005533938 podman[96377]: 2025-11-24 18:21:20.52363647 +0000 UTC m=+0.025951476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:20 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:20 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:20 np0005533938 podman[96377]: 2025-11-24 18:21:20.637688371 +0000 UTC m=+0.140003307 container init 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:20 np0005533938 podman[96377]: 2025-11-24 18:21:20.642918173 +0000 UTC m=+0.145233119 container start 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:21:20 np0005533938 podman[96377]: 2025-11-24 18:21:20.646195556 +0000 UTC m=+0.148510502 container attach 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2554744173' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 24 13:21:21 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 24 13:21:21 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 24 13:21:22 np0005533938 infallible_lewin[96392]: enabled application 'rbd' on pool 'vms'
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 24 13:21:22 np0005533938 systemd[1]: libpod-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope: Deactivated successfully.
Nov 24 13:21:22 np0005533938 podman[96377]: 2025-11-24 18:21:22.152695283 +0000 UTC m=+1.655010249 container died 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8bae5647939d50c0a37a86f96822a0b918ed4be01d34d1901fcd2c9e2a5410a2-merged.mount: Deactivated successfully.
Nov 24 13:21:22 np0005533938 podman[96377]: 2025-11-24 18:21:22.200840288 +0000 UTC m=+1.703155214 container remove 404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650 (image=quay.io/ceph/ceph:v18, name=infallible_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:22 np0005533938 systemd[1]: libpod-conmon-404ed7573b96ef0ffcd5926e800208bca3b0c864e9bdd2a2094ed790378b6650.scope: Deactivated successfully.
Nov 24 13:21:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 3 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:22 np0005533938 python3[96456]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:22 np0005533938 podman[96457]: 2025-11-24 18:21:22.60555585 +0000 UTC m=+0.076239917 container create 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:21:22 np0005533938 podman[96457]: 2025-11-24 18:21:22.552047908 +0000 UTC m=+0.022731985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:22 np0005533938 systemd[1]: Started libpod-conmon-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope.
Nov 24 13:21:22 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:22 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:22 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:22 np0005533938 podman[96457]: 2025-11-24 18:21:22.706240683 +0000 UTC m=+0.176925100 container init 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:21:22 np0005533938 podman[96457]: 2025-11-24 18:21:22.710953962 +0000 UTC m=+0.181638019 container start 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:22 np0005533938 podman[96457]: 2025-11-24 18:21:22.713873796 +0000 UTC m=+0.184557903 container attach 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:21:23 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3201017558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 24 13:21:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 24 13:21:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 13:21:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 24 13:21:24 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 24 13:21:24 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 13:21:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 24 13:21:24 np0005533938 awesome_antonelli[96472]: enabled application 'rbd' on pool 'volumes'
Nov 24 13:21:24 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 24 13:21:24 np0005533938 systemd[1]: libpod-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope: Deactivated successfully.
Nov 24 13:21:24 np0005533938 podman[96457]: 2025-11-24 18:21:24.187334089 +0000 UTC m=+1.658018146 container died 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:21:24 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2bc0b0b31940ae919bf7cbf65f5941e23c7062b78a0f6f0ce8fe86e8a0ad0796-merged.mount: Deactivated successfully.
Nov 24 13:21:24 np0005533938 podman[96457]: 2025-11-24 18:21:24.226002836 +0000 UTC m=+1.696686893 container remove 3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9 (image=quay.io/ceph/ceph:v18, name=awesome_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:24 np0005533938 systemd[1]: libpod-conmon-3debe000492041d9f8fe397e8fe3f437a0474937a992e06e1c78b51378a869a9.scope: Deactivated successfully.
Nov 24 13:21:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:24 np0005533938 python3[96534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:24 np0005533938 podman[96535]: 2025-11-24 18:21:24.566834513 +0000 UTC m=+0.056950229 container create 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:21:24 np0005533938 systemd[1]: Started libpod-conmon-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope.
Nov 24 13:21:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:24 np0005533938 podman[96535]: 2025-11-24 18:21:24.634374318 +0000 UTC m=+0.124490034 container init 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:21:24 np0005533938 podman[96535]: 2025-11-24 18:21:24.541910944 +0000 UTC m=+0.032026680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:24 np0005533938 podman[96535]: 2025-11-24 18:21:24.646445483 +0000 UTC m=+0.136561209 container start 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:24 np0005533938 podman[96535]: 2025-11-24 18:21:24.650184918 +0000 UTC m=+0.140300634 container attach 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1423363930' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 24 13:21:25 np0005533938 modest_ritchie[96550]: enabled application 'rbd' on pool 'backups'
Nov 24 13:21:25 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 24 13:21:25 np0005533938 systemd[1]: libpod-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope: Deactivated successfully.
Nov 24 13:21:25 np0005533938 podman[96535]: 2025-11-24 18:21:25.193778097 +0000 UTC m=+0.683893863 container died 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:21:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a49a799cd3b65b71d9897b75141c6f618af8421a2e86392fbbd54a371f5e2709-merged.mount: Deactivated successfully.
Nov 24 13:21:25 np0005533938 podman[96535]: 2025-11-24 18:21:25.231591102 +0000 UTC m=+0.721706818 container remove 956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037 (image=quay.io/ceph/ceph:v18, name=modest_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:25 np0005533938 systemd[1]: libpod-conmon-956c72f34b3fdef44d837e33293fd4334209f40f71b6a62c038f51b474808037.scope: Deactivated successfully.
Nov 24 13:21:25 np0005533938 python3[96612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:25 np0005533938 podman[96613]: 2025-11-24 18:21:25.540859372 +0000 UTC m=+0.034914632 container create 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:21:25 np0005533938 systemd[1]: Started libpod-conmon-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope.
Nov 24 13:21:25 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:25 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:25 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:25 np0005533938 podman[96613]: 2025-11-24 18:21:25.608732097 +0000 UTC m=+0.102787357 container init 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:21:25 np0005533938 podman[96613]: 2025-11-24 18:21:25.615391575 +0000 UTC m=+0.109446835 container start 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 13:21:25 np0005533938 podman[96613]: 2025-11-24 18:21:25.618402321 +0000 UTC m=+0.112457611 container attach 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:25 np0005533938 podman[96613]: 2025-11-24 18:21:25.524813547 +0000 UTC m=+0.018868827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3443822626' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 24 13:21:26 np0005533938 relaxed_spence[96629]: enabled application 'rbd' on pool 'images'
Nov 24 13:21:26 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 24 13:21:26 np0005533938 systemd[1]: libpod-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope: Deactivated successfully.
Nov 24 13:21:26 np0005533938 podman[96654]: 2025-11-24 18:21:26.237077186 +0000 UTC m=+0.021477933 container died 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:21:26 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dbaa7afdbb2ad284ac656a4327e1f257486979fb228c695e394824e6396d03dd-merged.mount: Deactivated successfully.
Nov 24 13:21:26 np0005533938 podman[96654]: 2025-11-24 18:21:26.283871978 +0000 UTC m=+0.068272695 container remove 3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612 (image=quay.io/ceph/ceph:v18, name=relaxed_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:26 np0005533938 systemd[1]: libpod-conmon-3a919e337e7d4f922b2b71338ee3b6022201313b5966d2e65ab9780a37406612.scope: Deactivated successfully.
Nov 24 13:21:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:26 np0005533938 python3[96695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:26 np0005533938 podman[96696]: 2025-11-24 18:21:26.63555978 +0000 UTC m=+0.051118792 container create 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:26 np0005533938 systemd[1]: Started libpod-conmon-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope.
Nov 24 13:21:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:26 np0005533938 podman[96696]: 2025-11-24 18:21:26.694599301 +0000 UTC m=+0.110158333 container init 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:26 np0005533938 podman[96696]: 2025-11-24 18:21:26.699597577 +0000 UTC m=+0.115156609 container start 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:26 np0005533938 podman[96696]: 2025-11-24 18:21:26.702713306 +0000 UTC m=+0.118272338 container attach 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:21:26 np0005533938 podman[96696]: 2025-11-24 18:21:26.619524195 +0000 UTC m=+0.035083227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:27 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:27 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3223936195' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 24 13:21:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 24 13:21:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 13:21:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 24 13:21:28 np0005533938 romantic_brahmagupta[96711]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 24 13:21:28 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 24 13:21:28 np0005533938 systemd[1]: libpod-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope: Deactivated successfully.
Nov 24 13:21:28 np0005533938 podman[96736]: 2025-11-24 18:21:28.279316923 +0000 UTC m=+0.034935783 container died 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:21:28 np0005533938 systemd[1]: var-lib-containers-storage-overlay-14bc025b1e99557b72e288e8996d24ff6af67e1b1397497a8ae87fb6aa36bd56-merged.mount: Deactivated successfully.
Nov 24 13:21:28 np0005533938 podman[96736]: 2025-11-24 18:21:28.325807427 +0000 UTC m=+0.081426287 container remove 270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a (image=quay.io/ceph/ceph:v18, name=romantic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:21:28 np0005533938 systemd[1]: libpod-conmon-270c24d2c5734502c65d6e34e2282ecb98e132f163d3bf269321bd9d7900c68a.scope: Deactivated successfully.
Nov 24 13:21:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:28 np0005533938 python3[96776]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:28 np0005533938 podman[96777]: 2025-11-24 18:21:28.665479926 +0000 UTC m=+0.035896068 container create 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:21:28 np0005533938 systemd[1]: Started libpod-conmon-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope.
Nov 24 13:21:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:28 np0005533938 podman[96777]: 2025-11-24 18:21:28.650003325 +0000 UTC m=+0.020419487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:28 np0005533938 podman[96777]: 2025-11-24 18:21:28.752292128 +0000 UTC m=+0.122708350 container init 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:28 np0005533938 podman[96777]: 2025-11-24 18:21:28.756983447 +0000 UTC m=+0.127399579 container start 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 24 13:21:28 np0005533938 podman[96777]: 2025-11-24 18:21:28.760619519 +0000 UTC m=+0.131035691 container attach 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:29 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1036764824' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 24 13:21:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 24 13:21:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 13:21:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 24 13:21:30 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 24 13:21:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 13:21:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 24 13:21:30 np0005533938 friendly_williamson[96792]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 24 13:21:30 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 24 13:21:30 np0005533938 systemd[1]: libpod-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope: Deactivated successfully.
Nov 24 13:21:30 np0005533938 podman[96777]: 2025-11-24 18:21:30.26091956 +0000 UTC m=+1.631335712 container died 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:21:30 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4e2d2ec248d43e20f18ccf920bc6083fb9e4fe5b440ed6c075c6cdabc30be9e6-merged.mount: Deactivated successfully.
Nov 24 13:21:30 np0005533938 podman[96777]: 2025-11-24 18:21:30.295661138 +0000 UTC m=+1.666077280 container remove 506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b (image=quay.io/ceph/ceph:v18, name=friendly_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:30 np0005533938 systemd[1]: libpod-conmon-506e07d7ac00e386af3ea46c1b1d7a81c4e3431015f88f452048679205492d4b.scope: Deactivated successfully.
Nov 24 13:21:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:31 np0005533938 python3[96902]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:21:31 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:21:31 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 13:21:31 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/871528631' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 24 13:21:31 np0005533938 python3[96973]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008490.9135487-36807-49611141934198/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:21:32 np0005533938 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:21:32 np0005533938 ceph-mon[74927]: Cluster is now healthy
Nov 24 13:21:32 np0005533938 python3[97075]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:21:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:32 np0005533938 python3[97150]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008491.9129853-36821-134738530745701/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=16f341f65597afdb4f6379924c5b911b6b6b7430 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:21:33 np0005533938 python3[97200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.060218298 +0000 UTC m=+0.039131653 container create 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:33 np0005533938 systemd[1]: Started libpod-conmon-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope.
Nov 24 13:21:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.131207501 +0000 UTC m=+0.110120886 container init 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.135673892 +0000 UTC m=+0.114587247 container start 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.04097201 +0000 UTC m=+0.019885395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.138566804 +0000 UTC m=+0.117480159 container attach 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:21:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 24 13:21:33 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 13:21:33 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 13:21:33 np0005533938 agitated_darwin[97217]: 
Nov 24 13:21:33 np0005533938 agitated_darwin[97217]: [global]
Nov 24 13:21:33 np0005533938 agitated_darwin[97217]: 	fsid = e5ee928f-099b-569b-93c9-ecf025cbb50d
Nov 24 13:21:33 np0005533938 agitated_darwin[97217]: 	mon_host = 192.168.122.100
Nov 24 13:21:33 np0005533938 systemd[1]: libpod-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope: Deactivated successfully.
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.671370483 +0000 UTC m=+0.650283848 container died 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:21:33 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ae55e0299dd643136efcbe80a8295a22eebf9dd683f24cd83d8211559b52be77-merged.mount: Deactivated successfully.
Nov 24 13:21:33 np0005533938 podman[97201]: 2025-11-24 18:21:33.70788315 +0000 UTC m=+0.686796505 container remove 102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4 (image=quay.io/ceph/ceph:v18, name=agitated_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:21:33 np0005533938 systemd[1]: libpod-conmon-102f3d550bee38263bfe35a79d1acf215a5f75a52437babda1fc4341f31e87b4.scope: Deactivated successfully.
Nov 24 13:21:33 np0005533938 python3[97377]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.061753517 +0000 UTC m=+0.052208377 container create f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:21:34 np0005533938 systemd[1]: Started libpod-conmon-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope.
Nov 24 13:21:34 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:34 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:34 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:34 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.044509929 +0000 UTC m=+0.034964789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.159667968 +0000 UTC m=+0.150122828 container init f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.166564479 +0000 UTC m=+0.157019319 container start f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.169948333 +0000 UTC m=+0.160403193 container attach f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/4075813533' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 24 13:21:34 np0005533938 podman[97464]: 2025-11-24 18:21:34.424387291 +0000 UTC m=+0.050524665 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:21:34
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta']
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:21:34 np0005533938 podman[97464]: 2025-11-24 18:21:34.529764388 +0000 UTC m=+0.155901752 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:21:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1665243259' entity='client.admin' 
Nov 24 13:21:34 np0005533938 nifty_dubinsky[97417]: set ssl_option
Nov 24 13:21:34 np0005533938 systemd[1]: libpod-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope: Deactivated successfully.
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.813243897 +0000 UTC m=+0.803698737 container died f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:21:34 np0005533938 systemd[1]: var-lib-containers-storage-overlay-23b630ed71b5269f3bcc77e7357f59052b0d714d13bfcef1130b193770e6e05e-merged.mount: Deactivated successfully.
Nov 24 13:21:34 np0005533938 podman[97379]: 2025-11-24 18:21:34.853005864 +0000 UTC m=+0.843460704 container remove f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25 (image=quay.io/ceph/ceph:v18, name=nifty_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:34 np0005533938 systemd[1]: libpod-conmon-f6fa2bda071a4ce0a57d76dde4eb444964ecca162c2de13d2ffbe211e36fbc25.scope: Deactivated successfully.
Nov 24 13:21:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d08a0c24-e797-4c03-ae7e-a8d0c69d3730 does not exist
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 30f0eaec-d29c-475c-9ba6-c8d6b26d874b does not exist
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6a28b409-90a2-4665-9d64-0f54ca9cfdab does not exist
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:35 np0005533938 python3[97644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.261945599 +0000 UTC m=+0.038064677 container create 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/1665243259' entity='client.admin' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:35 np0005533938 systemd[1]: Started libpod-conmon-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope.
Nov 24 13:21:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.339480213 +0000 UTC m=+0.115599311 container init 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.24668672 +0000 UTC m=+0.022805808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.345948063 +0000 UTC m=+0.122067141 container start 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.348684161 +0000 UTC m=+0.124803239 container attach 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.5822171 +0000 UTC m=+0.040412604 container create b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:21:35 np0005533938 systemd[1]: Started libpod-conmon-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope.
Nov 24 13:21:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.645509582 +0000 UTC m=+0.103705086 container init b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.650721941 +0000 UTC m=+0.108917435 container start b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.653758507 +0000 UTC m=+0.111954031 container attach b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:21:35 np0005533938 jolly_rhodes[97818]: 167 167
Nov 24 13:21:35 np0005533938 systemd[1]: libpod-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope: Deactivated successfully.
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.655030908 +0000 UTC m=+0.113226412 container died b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.56568185 +0000 UTC m=+0.023877384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:35 np0005533938 systemd[1]: var-lib-containers-storage-overlay-56c39e3a35d63ba7d0a0792f04bc7b95d1083c2548199e79c24c175ffbcb2792-merged.mount: Deactivated successfully.
Nov 24 13:21:35 np0005533938 podman[97803]: 2025-11-24 18:21:35.686378427 +0000 UTC m=+0.144573921 container remove b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:35 np0005533938 systemd[1]: libpod-conmon-b2b544daddfc90baf3fccd14a7ddfa8c74744fae55a384d0098050b03375d249.scope: Deactivated successfully.
Nov 24 13:21:35 np0005533938 podman[97862]: 2025-11-24 18:21:35.846137613 +0000 UTC m=+0.050023783 container create f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:35 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:35 np0005533938 systemd[1]: Started libpod-conmon-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope.
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 13:21:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:35 np0005533938 hungry_bassi[97760]: Scheduled rgw.rgw update...
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.915986238 +0000 UTC m=+0.692105356 container died 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:35 np0005533938 systemd[1]: libpod-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope: Deactivated successfully.
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 podman[97862]: 2025-11-24 18:21:35.831381137 +0000 UTC m=+0.035267337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:35 np0005533938 podman[97862]: 2025-11-24 18:21:35.940396034 +0000 UTC m=+0.144282224 container init f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:35 np0005533938 systemd[1]: var-lib-containers-storage-overlay-997d7a3446af536534d49453f0aa2f3b65a92fcb22ff089e21d3be4e1ed75cba-merged.mount: Deactivated successfully.
Nov 24 13:21:35 np0005533938 podman[97862]: 2025-11-24 18:21:35.948990357 +0000 UTC m=+0.152876537 container start f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:21:35 np0005533938 podman[97862]: 2025-11-24 18:21:35.953238933 +0000 UTC m=+0.157125133 container attach f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:21:35 np0005533938 podman[97719]: 2025-11-24 18:21:35.967862786 +0000 UTC m=+0.743981864 container remove 67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5 (image=quay.io/ceph/ceph:v18, name=hungry_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:21:35 np0005533938 systemd[1]: libpod-conmon-67a9ecf47ff818502d6ab6cc0a7e8bc8c83f9ca53f66f13c71292066d57cd0e5.scope: Deactivated successfully.
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 24 13:21:36 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:36 np0005533938 python3[97983]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:21:36 np0005533938 boring_einstein[97879]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:21:36 np0005533938 boring_einstein[97879]: --> relative data size: 1.0
Nov 24 13:21:36 np0005533938 boring_einstein[97879]: --> All data devices are unavailable
Nov 24 13:21:37 np0005533938 systemd[1]: libpod-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Deactivated successfully.
Nov 24 13:21:37 np0005533938 systemd[1]: libpod-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Consumed 1.019s CPU time.
Nov 24 13:21:37 np0005533938 podman[97862]: 2025-11-24 18:21:37.01835265 +0000 UTC m=+1.222238830 container died f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8c23b481e2f588c59e78f9cbdc47449b5ec69523cc16487750f505dc46227f91-merged.mount: Deactivated successfully.
Nov 24 13:21:37 np0005533938 podman[97862]: 2025-11-24 18:21:37.068556007 +0000 UTC m=+1.272442177 container remove f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:37 np0005533938 systemd[1]: libpod-conmon-f50666e6f2cc17d306a03b9c8d8988856d7461776adca9a4ae504319596d6388.scope: Deactivated successfully.
Nov 24 13:21:37 np0005533938 python3[98098]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008496.6042538-36862-181779954170689/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 24 13:21:37 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:37 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38 pruub=10.759669304s) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active pruub 67.664543152s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:37 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38 pruub=10.759669304s) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown pruub 67.664543152s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.650254251 +0000 UTC m=+0.045368828 container create 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:37 np0005533938 systemd[1]: Started libpod-conmon-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope.
Nov 24 13:21:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.715638765 +0000 UTC m=+0.110753342 container init 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.723670434 +0000 UTC m=+0.118785001 container start 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.726032423 +0000 UTC m=+0.121146990 container attach 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.629767012 +0000 UTC m=+0.024881629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:37 np0005533938 systemd[1]: libpod-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope: Deactivated successfully.
Nov 24 13:21:37 np0005533938 xenodochial_boyd[98286]: 167 167
Nov 24 13:21:37 np0005533938 conmon[98286]: conmon 889b288ebba50f35fea7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope/container/memory.events
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.729653943 +0000 UTC m=+0.124768510 container died 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:37 np0005533938 python3[98268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-70abd7b423c3db90603f3514dbfb1dc74b060546d5efebab079f905a3c5e55f8-merged.mount: Deactivated successfully.
Nov 24 13:21:37 np0005533938 podman[98269]: 2025-11-24 18:21:37.806389198 +0000 UTC m=+0.201503765 container remove 889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:21:37 np0005533938 systemd[1]: libpod-conmon-889b288ebba50f35fea72f0703d50588abf71a018af62076f58214d0e7deece0.scope: Deactivated successfully.
Nov 24 13:21:37 np0005533938 podman[98292]: 2025-11-24 18:21:37.847763985 +0000 UTC m=+0.085809001 container create 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:37 np0005533938 systemd[1]: Started libpod-conmon-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope.
Nov 24 13:21:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:37 np0005533938 podman[98292]: 2025-11-24 18:21:37.904035343 +0000 UTC m=+0.142080419 container init 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:21:37 np0005533938 podman[98292]: 2025-11-24 18:21:37.911841516 +0000 UTC m=+0.149886512 container start 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:21:37 np0005533938 podman[98292]: 2025-11-24 18:21:37.815693519 +0000 UTC m=+0.053738545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:37 np0005533938 podman[98292]: 2025-11-24 18:21:37.91520974 +0000 UTC m=+0.153254806 container attach 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:21:37 np0005533938 podman[98327]: 2025-11-24 18:21:37.957663954 +0000 UTC m=+0.042026834 container create 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:21:37 np0005533938 systemd[1]: Started libpod-conmon-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope.
Nov 24 13:21:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:37.942073107 +0000 UTC m=+0.026436017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:38.057179025 +0000 UTC m=+0.141541975 container init 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:38.065209705 +0000 UTC m=+0.149572605 container start 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:38.068110097 +0000 UTC m=+0.152473007 container attach 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=23/24 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=23/23 les/c/f=24/24/0 sis=38) [1] r=0 lpr=38 pi=[23,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 13:21:38 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0[74923]: 2025-11-24T18:21:38.430+0000 7f94aeb23640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 new map
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 print_map
    e2
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-11-24T18:21:38.431267+0000
    modified	2025-11-24T18:21:38.431323+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 24 13:21:38 np0005533938 systemd[1]: libpod-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope: Deactivated successfully.
Nov 24 13:21:38 np0005533938 podman[98292]: 2025-11-24 18:21:38.472709933 +0000 UTC m=+0.710754949 container died 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v93: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8a0bf44b6b97d6a0975662db4effe849eaa7897d67fceabd93bfd78785b4d3e6-merged.mount: Deactivated successfully.
Nov 24 13:21:38 np0005533938 podman[98292]: 2025-11-24 18:21:38.516296355 +0000 UTC m=+0.754341361 container remove 368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647 (image=quay.io/ceph/ceph:v18, name=peaceful_heisenberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:21:38 np0005533938 systemd[1]: libpod-conmon-368585f5c544ac87a94a3aa46ed80732abb505072e3359e338acea5a8132a647.scope: Deactivated successfully.
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]: {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    "0": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "devices": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "/dev/loop3"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            ],
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_name": "ceph_lv0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_size": "21470642176",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "name": "ceph_lv0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "tags": {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.crush_device_class": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.encrypted": "0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_id": "0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.vdo": "0"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            },
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "vg_name": "ceph_vg0"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        }
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    ],
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    "1": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "devices": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "/dev/loop4"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            ],
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_name": "ceph_lv1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_size": "21470642176",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "name": "ceph_lv1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "tags": {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.crush_device_class": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.encrypted": "0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_id": "1",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.vdo": "0"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            },
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "vg_name": "ceph_vg1"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        }
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    ],
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    "2": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "devices": [
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "/dev/loop5"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            ],
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_name": "ceph_lv2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_size": "21470642176",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "name": "ceph_lv2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "tags": {
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.crush_device_class": "",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.encrypted": "0",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osd_id": "2",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:                "ceph.vdo": "0"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            },
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "type": "block",
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:            "vg_name": "ceph_vg2"
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:        }
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]:    ]
Nov 24 13:21:38 np0005533938 gifted_wozniak[98346]: }
Nov 24 13:21:38 np0005533938 python3[98408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:38 np0005533938 systemd[1]: libpod-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope: Deactivated successfully.
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:38.838990988 +0000 UTC m=+0.923353888 container died 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:21:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-51f032e7a6cc6f7ae647a48093488b8acba761ae38f45e3e83e1b7aeb9c193c8-merged.mount: Deactivated successfully.
Nov 24 13:21:38 np0005533938 podman[98413]: 2025-11-24 18:21:38.889697036 +0000 UTC m=+0.051236312 container create 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:21:38 np0005533938 podman[98327]: 2025-11-24 18:21:38.89668022 +0000 UTC m=+0.981043110 container remove 0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wozniak, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:38 np0005533938 systemd[1]: libpod-conmon-0e4463cfd6dea56fac23dc07b92a21f340b211fe3d083fd2dfe9fc258169a641.scope: Deactivated successfully.
Nov 24 13:21:38 np0005533938 systemd[1]: Started libpod-conmon-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope.
Nov 24 13:21:38 np0005533938 podman[98413]: 2025-11-24 18:21:38.869587137 +0000 UTC m=+0.031126433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:38 np0005533938 podman[98413]: 2025-11-24 18:21:38.97883234 +0000 UTC m=+0.140371676 container init 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:21:38 np0005533938 podman[98413]: 2025-11-24 18:21:38.985497255 +0000 UTC m=+0.147036531 container start 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:21:38 np0005533938 podman[98413]: 2025-11-24 18:21:38.988585052 +0000 UTC m=+0.150124318 container attach 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.416066576 +0000 UTC m=+0.034919848 container create 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 24 13:21:39 np0005533938 systemd[1]: Started libpod-conmon-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope.
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event ed1dd8bb-0ade-4f15-b635-ab212938f2fe (PG autoscaler increasing pool 2 PGs from 1 to 32) in 4 seconds
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event ce48b40b-ae73-4c57-b147-b639b9742672 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 3 seconds
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 21277009-2326-4879-8df2-e125e4065fb1 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 304c0f2e-06cd-431f-aba9-c7d965d83074 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 1 seconds
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 288afbf3-be1e-45d6-9c0a-4665b2bdc2d8 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 033bfeea-2045-48ad-993c-b77cee9df009 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=14.567813873s) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active pruub 66.103454590s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=14.567813873s) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown pruub 66.103454590s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=11.635890961s) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active pruub 63.178226471s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:39 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41 pruub=11.635890961s) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown pruub 63.178226471s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.487162862 +0000 UTC m=+0.106016134 container init 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.49231561 +0000 UTC m=+0.111168862 container start 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.494838882 +0000 UTC m=+0.113692154 container attach 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:21:39 np0005533938 amazing_hoover[98617]: 167 167
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.401012102 +0000 UTC m=+0.019865384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:39 np0005533938 systemd[1]: libpod-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope: Deactivated successfully.
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.496783201 +0000 UTC m=+0.115636453 container died 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:21:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b54a4f060a0f9cabcfd0c82dc97826166b39e979247fe3465851d1bd89533f60-merged.mount: Deactivated successfully.
Nov 24 13:21:39 np0005533938 podman[98600]: 2025-11-24 18:21:39.524570771 +0000 UTC m=+0.143424023 container remove 1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hoover, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:39 np0005533938 awesome_shtern[98443]: Scheduled mds.cephfs update...
Nov 24 13:21:39 np0005533938 systemd[1]: libpod-conmon-1505b29270d3d32179b5fcf28716171c194563609926ca4c8d2d1bdec533c67b.scope: Deactivated successfully.
Nov 24 13:21:39 np0005533938 systemd[1]: libpod-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope: Deactivated successfully.
Nov 24 13:21:39 np0005533938 podman[98413]: 2025-11-24 18:21:39.551063888 +0000 UTC m=+0.712603164 container died 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:21:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d5033a9e6e9d69bf6334452aaedf643d3530a720ceab0c32773d2f5403e3dad4-merged.mount: Deactivated successfully.
Nov 24 13:21:39 np0005533938 podman[98413]: 2025-11-24 18:21:39.589880412 +0000 UTC m=+0.751419688 container remove 0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e (image=quay.io/ceph/ceph:v18, name=awesome_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:21:39 np0005533938 systemd[1]: libpod-conmon-0ae5bfdf2163437a0526b58ae48385831bc36385da002291643a7329577f0e7e.scope: Deactivated successfully.
Nov 24 13:21:39 np0005533938 podman[98652]: 2025-11-24 18:21:39.66226444 +0000 UTC m=+0.035903553 container create 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:39 np0005533938 systemd[1]: Started libpod-conmon-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope.
Nov 24 13:21:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:39 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 9 completed events
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:21:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:39 np0005533938 podman[98652]: 2025-11-24 18:21:39.739716033 +0000 UTC m=+0.113355146 container init 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:21:39 np0005533938 podman[98652]: 2025-11-24 18:21:39.645576365 +0000 UTC m=+0.019215498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:39 np0005533938 podman[98652]: 2025-11-24 18:21:39.745751133 +0000 UTC m=+0.119390246 container start 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:39 np0005533938 podman[98652]: 2025-11-24 18:21:39.748735567 +0000 UTC m=+0.122374680 container attach 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:39 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=9.316533089s) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 75.137977600s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:39 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=12.344822884s) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active pruub 78.166275024s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:39 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=9.316533089s) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown pruub 75.137977600s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:39 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41 pruub=12.344822884s) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.166275024s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 python3[98752]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: Saving service mds.cephfs spec with placement compute-0
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=26/27 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=27/28 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v96: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.17( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=38/42 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=41/42 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.6( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=26/26 les/c/f=27/27/0 sis=41) [2] r=0 lpr=41 pi=[26,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [2] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=41/42 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=27/27 les/c/f=28/28/0 sis=41) [0] r=0 lpr=41 pi=[27,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 24 13:21:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 24 13:21:40 np0005533938 python3[98835]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764008499.9765463-36892-81734009311733/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=da81228d7cc67f3a06b39ee156e276fa0a4ebf0e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:21:40 np0005533938 condescending_carson[98670]: {
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_id": 0,
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "type": "bluestore"
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    },
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_id": 1,
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "type": "bluestore"
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    },
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_id": 2,
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:        "type": "bluestore"
Nov 24 13:21:40 np0005533938 condescending_carson[98670]:    }
Nov 24 13:21:40 np0005533938 condescending_carson[98670]: }
Nov 24 13:21:40 np0005533938 systemd[1]: libpod-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Deactivated successfully.
Nov 24 13:21:40 np0005533938 systemd[1]: libpod-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Consumed 1.002s CPU time.
Nov 24 13:21:40 np0005533938 podman[98652]: 2025-11-24 18:21:40.748498212 +0000 UTC m=+1.122137325 container died 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-09acff362b97eea7c684db1d5cbaf565f6b57f559b918bd40175000e1e11a036-merged.mount: Deactivated successfully.
Nov 24 13:21:40 np0005533938 podman[98652]: 2025-11-24 18:21:40.798333259 +0000 UTC m=+1.171972372 container remove 74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:21:40 np0005533938 systemd[1]: libpod-conmon-74917c36395d7e0e24d5281427c606412e895134cc003063629189df8c87a529.scope: Deactivated successfully.
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:41 np0005533938 python3[98990]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.19303861 +0000 UTC m=+0.075512776 container create 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.137570033 +0000 UTC m=+0.020044209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:41 np0005533938 systemd[1]: Started libpod-conmon-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope.
Nov 24 13:21:41 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.278537323 +0000 UTC m=+0.161011529 container init 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.284185063 +0000 UTC m=+0.166659239 container start 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.287956727 +0000 UTC m=+0.170430893 container attach 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 24 13:21:41 np0005533938 podman[99156]: 2025-11-24 18:21:41.583216148 +0000 UTC m=+0.062017701 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:21:41 np0005533938 podman[99156]: 2025-11-24 18:21:41.683494508 +0000 UTC m=+0.162296011 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 13:21:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 13:21:41 np0005533938 systemd[1]: libpod-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope: Deactivated successfully.
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.891270258 +0000 UTC m=+0.773744444 container died 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:41 np0005533938 systemd[76548]: Starting Mark boot as successful...
Nov 24 13:21:41 np0005533938 systemd[76548]: Finished Mark boot as successful.
Nov 24 13:21:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d99034a3dcec2f20796f9ccbf38ab74f84ba5091770537005e2439ba22a349b0-merged.mount: Deactivated successfully.
Nov 24 13:21:41 np0005533938 podman[99043]: 2025-11-24 18:21:41.950444347 +0000 UTC m=+0.832918503 container remove 33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd (image=quay.io/ceph/ceph:v18, name=objective_lamarr, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:41 np0005533938 systemd[1]: libpod-conmon-33a56513b5692696c799ac308e5e8bc1fbd70269d4f346b766948b42156beefd.scope: Deactivated successfully.
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 244c4978-a219-461b-8478-c00dfaec020c does not exist
Nov 24 13:21:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d7862420-66f0-4daa-a174-5dddf2dad6c5 does not exist
Nov 24 13:21:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2c1d3e0c-d978-40ca-8c5c-3a70074592e1 does not exist
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 155 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/173009766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:42 np0005533938 python3[99384]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.489816666s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active pruub 72.722373962s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=10.489816666s) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown pruub 72.722373962s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:42 np0005533938 podman[99436]: 2025-11-24 18:21:42.65127706 +0000 UTC m=+0.049639474 container create e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 13:21:42 np0005533938 systemd[1]: Started libpod-conmon-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope.
Nov 24 13:21:42 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 24 13:21:42 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 24 13:21:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:42 np0005533938 podman[99436]: 2025-11-24 18:21:42.714674164 +0000 UTC m=+0.113036608 container init e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:21:42 np0005533938 podman[99436]: 2025-11-24 18:21:42.720786086 +0000 UTC m=+0.119148490 container start e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:42 np0005533938 podman[99436]: 2025-11-24 18:21:42.72340134 +0000 UTC m=+0.121763754 container attach e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:21:42 np0005533938 podman[99436]: 2025-11-24 18:21:42.633810456 +0000 UTC m=+0.032172890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:42 np0005533938 podman[99497]: 2025-11-24 18:21:42.881615919 +0000 UTC m=+0.025082564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.036762862 +0000 UTC m=+0.180229497 container create 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:43 np0005533938 systemd[1]: Started libpod-conmon-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope.
Nov 24 13:21:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.222415771 +0000 UTC m=+0.365882416 container init 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.227471747 +0000 UTC m=+0.370938372 container start 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:43 np0005533938 great_montalcini[99533]: 167 167
Nov 24 13:21:43 np0005533938 systemd[1]: libpod-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope: Deactivated successfully.
Nov 24 13:21:43 np0005533938 conmon[99533]: conmon 9b0c7ded593ad1f47bbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope/container/memory.events
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.292642435 +0000 UTC m=+0.436109080 container attach 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.293022375 +0000 UTC m=+0.436489000 container died 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:21:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 13:21:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292142713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 13:21:43 np0005533938 angry_dewdney[99454]: 
Nov 24 13:21:43 np0005533938 angry_dewdney[99454]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":175,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":43,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":124},{"state_name":"active+clean","count":38}],"num_pgs":162,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84004864,"bytes_avail":64327921664,"bytes_total":64411926528,"unknown_pgs_ratio":0.76543211936950684},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-24T18:20:36.466398+0000","services":{}},"progress_events":{}}
Nov 24 13:21:43 np0005533938 systemd[1]: libpod-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope: Deactivated successfully.
Nov 24 13:21:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 24 13:21:43 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a01cf1f0b868e1f16b9de09e33e0bda492cae98f3bc79edb82621e97fb3d466c-merged.mount: Deactivated successfully.
Nov 24 13:21:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 24 13:21:43 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 24 13:21:43 np0005533938 podman[99436]: 2025-11-24 18:21:43.5030505 +0000 UTC m=+0.901412934 container died e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=43/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [1] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:43 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3a5a62ecd660fb53520bd08e8734e25145d67f346126c342e3491804cba44e6b-merged.mount: Deactivated successfully.
Nov 24 13:21:43 np0005533938 podman[99497]: 2025-11-24 18:21:43.541134496 +0000 UTC m=+0.684601121 container remove 9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:43 np0005533938 podman[99551]: 2025-11-24 18:21:43.546446868 +0000 UTC m=+0.210151810 container remove e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c (image=quay.io/ceph/ceph:v18, name=angry_dewdney, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:43 np0005533938 systemd[1]: libpod-conmon-e6ebb17cae0841fd8b98f98277fa15d11f2b6490e816ac159dbf8b0814fc189c.scope: Deactivated successfully.
Nov 24 13:21:43 np0005533938 systemd[1]: libpod-conmon-9b0c7ded593ad1f47bbc2daa9ffbfbb435342a9c497bf51ec319f22b86fc087c.scope: Deactivated successfully.
Nov 24 13:21:43 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 24 13:21:43 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 24 13:21:43 np0005533938 podman[99580]: 2025-11-24 18:21:43.716336836 +0000 UTC m=+0.047885710 container create dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:43 np0005533938 systemd[1]: Started libpod-conmon-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope.
Nov 24 13:21:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 podman[99580]: 2025-11-24 18:21:43.694973785 +0000 UTC m=+0.026522719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:43 np0005533938 podman[99580]: 2025-11-24 18:21:43.792394464 +0000 UTC m=+0.123943358 container init dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:43 np0005533938 podman[99580]: 2025-11-24 18:21:43.800169358 +0000 UTC m=+0.131718242 container start dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:43 np0005533938 podman[99580]: 2025-11-24 18:21:43.803920201 +0000 UTC m=+0.135469085 container attach dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:43 np0005533938 python3[99613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:43 np0005533938 podman[99621]: 2025-11-24 18:21:43.936697978 +0000 UTC m=+0.037380580 container create b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:21:43 np0005533938 systemd[1]: Started libpod-conmon-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope.
Nov 24 13:21:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:44 np0005533938 podman[99621]: 2025-11-24 18:21:44.008052129 +0000 UTC m=+0.108734741 container init b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:21:44 np0005533938 podman[99621]: 2025-11-24 18:21:43.919558622 +0000 UTC m=+0.020241234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:44 np0005533938 podman[99621]: 2025-11-24 18:21:44.020001326 +0000 UTC m=+0.120683938 container start b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:21:44 np0005533938 podman[99621]: 2025-11-24 18:21:44.023582355 +0000 UTC m=+0.124264967 container attach b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:21:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972961426s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565208435s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972451210s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564727783s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972639084s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564918518s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972542763s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564819336s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972915649s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565208435s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972589493s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564918518s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972389221s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564727783s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972466469s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564819336s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972216606s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564704895s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972378731s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564872742s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972356796s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564872742s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972193718s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564704895s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972417831s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564964294s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972393990s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564964294s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972384453s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565002441s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972573280s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565185547s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972368240s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565002441s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972553253s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972796440s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565521240s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972274780s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565032959s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972774506s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565521240s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972763062s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565551758s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972251892s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565032959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972743988s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565551758s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972208023s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565055847s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972194672s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565055847s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972286224s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565185547s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972272873s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565185547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.971922874s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.564880371s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972307205s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565269470s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972374916s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565353394s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.971899986s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.564880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972357750s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565353394s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972282410s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565269470s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972621918s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565628052s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972594261s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565628052s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972410202s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565559387s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972494125s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565673828s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972476006s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565673828s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972030640s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565261841s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.973052979s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566291809s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972017288s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565261841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.973031044s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566291809s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972310066s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565605164s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972296715s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565605164s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972310066s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565628052s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972296715s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565628052s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972307205s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565689087s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972433090s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565826416s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972291946s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565689087s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972815514s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566238403s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972408295s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565826416s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972800255s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972352982s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565834045s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972339630s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565834045s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972348213s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565872192s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972334862s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972292900s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565856934s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972325325s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565895081s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972311974s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565895081s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972267151s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972247124s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565879822s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972230911s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565879822s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972231865s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565917969s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972186089s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565910339s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972208977s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565917969s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972166061s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565910339s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972162247s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565940857s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972177505s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.565963745s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972281456s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566093445s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972155571s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565963745s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972265244s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566093445s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972393036s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565559387s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972136497s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.565940857s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972146988s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566017151s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972131729s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566017151s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972361565s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566284180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972232819s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566177368s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972347260s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566284180s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972199440s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566184998s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972184181s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566184998s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972196579s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566215515s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972215652s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566177368s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/42 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45 pruub=11.972173691s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566215515s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972187996s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 68.566246033s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=41/42 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.972173691s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.566246033s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.959789276s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541221619s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.959757805s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541221619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950924873s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532524109s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950902939s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532524109s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950791359s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532554626s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950774193s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532554626s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950805664s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532661438s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950791359s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532661438s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950683594s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532623291s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950666428s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532623291s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950606346s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532638550s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950592041s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532638550s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950521469s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532646179s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.987396240s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113769531s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950506210s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532646179s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950457573s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.532661438s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.950446129s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.532661438s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.987370491s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.795005798s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921493530s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794991493s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921493530s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794899940s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921485901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794883728s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921485901s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794740677s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921501160s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794722557s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921501160s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794586182s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921455383s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794571877s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921455383s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986700058s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113601685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986670494s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113601685s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794425011s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921447754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794410706s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921447754s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986455917s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113601685s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794167519s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921363831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986596107s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113800049s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986581802s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113800049s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794148445s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921363831s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794034004s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921386719s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986531258s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113906860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.794012070s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921386719s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986501694s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113906860s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793845177s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921371460s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793884277s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921424866s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793824196s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921371460s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793861389s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921424866s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986217499s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113868713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986203194s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113868713s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986048698s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113838196s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986026764s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113838196s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986024857s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113845825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.986001015s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113845825s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985937119s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113868713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793324471s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921279907s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985917091s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113868713s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793310165s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921279907s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793040276s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921104431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985826492s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113883972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.793024063s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921104431s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985806465s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113883972s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792888641s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.921096802s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985737801s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113967896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792870522s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.921096802s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985715866s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792678833s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920997620s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954547882s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541053772s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954516411s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541053772s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954245567s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.540954590s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954216957s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.540954590s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954298973s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541084290s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954276085s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541084290s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954187393s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541076660s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954154968s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541076660s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954084396s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541061401s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954063416s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541061401s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954076767s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.540954590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954051971s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541114807s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.954023361s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541114807s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953877449s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.540954590s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953915596s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541107178s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953892708s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541107178s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953944206s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541160583s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953897476s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541160583s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953869820s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541152954s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953824997s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541107178s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953847885s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541152954s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953803062s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541107178s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953815460s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541213989s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953799248s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541213989s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953764915s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541221619s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953743935s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541221619s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953792572s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541343689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953776360s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541343689s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953671455s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541275024s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953650475s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541275024s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953734398s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541397095s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953518867s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541198730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953451157s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541198730s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953720093s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541397095s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953609467s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541419983s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953585625s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541419983s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953466415s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541366577s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953445435s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541366577s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953682899s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541656494s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953592300s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541656494s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953296661s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541427612s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953273773s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541427612s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953255653s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541473389s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953228951s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541473389s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953012466s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541305542s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953123093s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541427612s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953100204s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541427612s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952962875s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541305542s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953118324s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541503906s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953091621s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541503906s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953083038s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953059196s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952995300s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.953011513s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541564941s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952999115s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541603088s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952971458s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952984810s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541603088s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952826500s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541542053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952803612s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541542053s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952827454s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 82.541595459s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952803612s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541595459s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=11.952991486s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.541564941s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792664528s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920997620s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985569000s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113922119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985550880s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113922119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792537689s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920959473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792518616s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920959473s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985431671s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113929749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985412598s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113929749s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792406082s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920959473s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792390823s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920959473s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985373497s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.114021301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792285919s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920936584s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985342979s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.114021301s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984911919s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113601685s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792072296s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920852661s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985165596s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113967896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792122841s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920936584s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.985146523s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792009354s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920852661s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792015076s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920944214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.792001724s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920944214s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984887123s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113845825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984868050s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113845825s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984975815s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.114006042s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984963417s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.114006042s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.791745186s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.920829773s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.791713715s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.920829773s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984852791s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113990784s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984839439s) [2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113990784s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984765053s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113990784s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984751701s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113990784s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984699249s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 79.113952637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=14.984679222s) [0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.113952637s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788194656s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.917518616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788181305s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.917518616s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788196564s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 73.917587280s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=9.788178444s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.917587280s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 13:21:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4001581995' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 13:21:44 np0005533938 jovial_borg[99636]: 
Nov 24 13:21:44 np0005533938 jovial_borg[99636]: {"epoch":1,"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","modified":"2025-11-24T18:18:43.057635Z","created":"2025-11-24T18:18:43.057635Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 24 13:21:44 np0005533938 jovial_borg[99636]: dumped monmap epoch 1
Nov 24 13:21:44 np0005533938 systemd[1]: libpod-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope: Deactivated successfully.
Nov 24 13:21:44 np0005533938 podman[99678]: 2025-11-24 18:21:44.692135036 +0000 UTC m=+0.036341713 container died b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay-932e71e10fd8050a5af6e3d91fc5678c868bc60eaf9ab993b773e3d0dc238597-merged.mount: Deactivated successfully.
Nov 24 13:21:44 np0005533938 podman[99678]: 2025-11-24 18:21:44.7289513 +0000 UTC m=+0.073157987 container remove b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4 (image=quay.io/ceph/ceph:v18, name=jovial_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:21:44 np0005533938 systemd[1]: libpod-conmon-b11f347d6a53010cb1b3143ff38ebb6057e5148b7628c897043a6294f6bbfcf4.scope: Deactivated successfully.
Nov 24 13:21:44 np0005533938 vibrant_murdock[99616]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:21:44 np0005533938 vibrant_murdock[99616]: --> relative data size: 1.0
Nov 24 13:21:44 np0005533938 vibrant_murdock[99616]: --> All data devices are unavailable
Nov 24 13:21:44 np0005533938 systemd[1]: libpod-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope: Deactivated successfully.
Nov 24 13:21:44 np0005533938 podman[99580]: 2025-11-24 18:21:44.773432915 +0000 UTC m=+1.104981829 container died dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f2cbed3599b370eeec8c4e0f56780c050e1a9bcbb3d783d5c824e8c7d8b55305-merged.mount: Deactivated successfully.
Nov 24 13:21:44 np0005533938 podman[99580]: 2025-11-24 18:21:44.831807364 +0000 UTC m=+1.163356238 container remove dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:44 np0005533938 systemd[1]: libpod-conmon-dfe9aae0b2691904230297558da3d083f89159c7501ef470ff1f0237191896a0.scope: Deactivated successfully.
Nov 24 13:21:45 np0005533938 python3[99838]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.273584814 +0000 UTC m=+0.033596965 container create 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:45 np0005533938 systemd[1]: Started libpod-conmon-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope.
Nov 24 13:21:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.341352526 +0000 UTC m=+0.101364697 container init 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.346438833 +0000 UTC m=+0.106450984 container start 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.259667668 +0000 UTC m=+0.019679859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.472242857 +0000 UTC m=+0.232255008 container attach 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.633604803 +0000 UTC m=+0.072180793 container create e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=38/21 lis/c=38/38 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=41/26 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=41/27 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=45) [2] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=38/23 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:45 np0005533938 systemd[1]: Started libpod-conmon-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope.
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.584975536 +0000 UTC m=+0.023551546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.706699198 +0000 UTC m=+0.145275228 container init e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.712730518 +0000 UTC m=+0.151306508 container start e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:45 np0005533938 charming_lamport[99929]: 167 167
Nov 24 13:21:45 np0005533938 systemd[1]: libpod-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope: Deactivated successfully.
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.716066041 +0000 UTC m=+0.154642061 container attach e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.716873401 +0000 UTC m=+0.155449451 container died e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:21:45 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0e159f7e3adf03a7579e7c9edfe1603c95fdcd834f91ec8a377e6991c4bc47bc-merged.mount: Deactivated successfully.
Nov 24 13:21:45 np0005533938 podman[99895]: 2025-11-24 18:21:45.762319939 +0000 UTC m=+0.200895929 container remove e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:21:45 np0005533938 systemd[1]: libpod-conmon-e57975a9c6266735f58c3db4c92f11dc422ec89fbca6a5b0784686e7bafdec8b.scope: Deactivated successfully.
Nov 24 13:21:45 np0005533938 podman[99956]: 2025-11-24 18:21:45.935736975 +0000 UTC m=+0.035507342 container create d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 24 13:21:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3247829842' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 13:21:45 np0005533938 condescending_darwin[99877]: [client.openstack]
Nov 24 13:21:45 np0005533938 condescending_darwin[99877]: 	key = AQBqoSRpAAAAABAAwYZz6MMXWB3V3iQXlmOz0w==
Nov 24 13:21:45 np0005533938 condescending_darwin[99877]: 	caps mgr = "allow *"
Nov 24 13:21:45 np0005533938 condescending_darwin[99877]: 	caps mon = "profile rbd"
Nov 24 13:21:45 np0005533938 condescending_darwin[99877]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 24 13:21:45 np0005533938 podman[99846]: 2025-11-24 18:21:45.976566139 +0000 UTC m=+0.736578310 container died 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:45 np0005533938 systemd[1]: Started libpod-conmon-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope.
Nov 24 13:21:45 np0005533938 systemd[1]: libpod-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope: Deactivated successfully.
Nov 24 13:21:46 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:45.920249591 +0000 UTC m=+0.020019978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-65e9a6dd35ff0be385c9f090f2c1bf378389e502a4a9248aea9b99a735acf7cc-merged.mount: Deactivated successfully.
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:46.021481155 +0000 UTC m=+0.121251522 container init d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:46.027671277 +0000 UTC m=+0.127441644 container start d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:46.032389564 +0000 UTC m=+0.132159961 container attach d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:21:46 np0005533938 podman[99846]: 2025-11-24 18:21:46.037289246 +0000 UTC m=+0.797301397 container remove 8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8 (image=quay.io/ceph/ceph:v18, name=condescending_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 13:21:46 np0005533938 systemd[1]: libpod-conmon-8d2c76d2f7f13e19df85254b818e65421d8b25187bc08a500663183f349a15c8.scope: Deactivated successfully.
Nov 24 13:21:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:46 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/3247829842' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 24 13:21:46 np0005533938 practical_gates[99975]: {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    "0": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "devices": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "/dev/loop3"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            ],
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_name": "ceph_lv0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_size": "21470642176",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "name": "ceph_lv0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "tags": {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.crush_device_class": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.encrypted": "0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_id": "0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.vdo": "0"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            },
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "vg_name": "ceph_vg0"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        }
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    ],
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    "1": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "devices": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "/dev/loop4"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            ],
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_name": "ceph_lv1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_size": "21470642176",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "name": "ceph_lv1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "tags": {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.crush_device_class": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.encrypted": "0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_id": "1",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.vdo": "0"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            },
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "vg_name": "ceph_vg1"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        }
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    ],
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    "2": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "devices": [
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "/dev/loop5"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            ],
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_name": "ceph_lv2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_size": "21470642176",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "name": "ceph_lv2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "tags": {
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.crush_device_class": "",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.encrypted": "0",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osd_id": "2",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:                "ceph.vdo": "0"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            },
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "type": "block",
Nov 24 13:21:46 np0005533938 practical_gates[99975]:            "vg_name": "ceph_vg2"
Nov 24 13:21:46 np0005533938 practical_gates[99975]:        }
Nov 24 13:21:46 np0005533938 practical_gates[99975]:    ]
Nov 24 13:21:46 np0005533938 practical_gates[99975]: }
Nov 24 13:21:46 np0005533938 systemd[1]: libpod-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope: Deactivated successfully.
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:46.768303318 +0000 UTC m=+0.868073685 container died d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 24 13:21:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fc1803449fb1e77601f38a780de5368f45366c74961a192241c48a505bc4c336-merged.mount: Deactivated successfully.
Nov 24 13:21:46 np0005533938 podman[99956]: 2025-11-24 18:21:46.819026267 +0000 UTC m=+0.918796634 container remove d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:21:46 np0005533938 systemd[1]: libpod-conmon-d9a02db99c7ae69374fc6b6d32e66546fa371c94a2cf7b6473a3626da3d29615.scope: Deactivated successfully.
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.370734196 +0000 UTC m=+0.035389189 container create 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:47 np0005533938 systemd[1]: Started libpod-conmon-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope.
Nov 24 13:21:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.439217977 +0000 UTC m=+0.103873010 container init 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.445283578 +0000 UTC m=+0.109938581 container start 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.447964484 +0000 UTC m=+0.112619507 container attach 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:21:47 np0005533938 hungry_lamarr[100314]: 167 167
Nov 24 13:21:47 np0005533938 systemd[1]: libpod-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope: Deactivated successfully.
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.449707367 +0000 UTC m=+0.114362380 container died 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:47 np0005533938 ansible-async_wrapper.py[100296]: Invoked with j78697275907 30 /home/zuul/.ansible/tmp/ansible-tmp-1764008507.023095-36964-273350074012948/AnsiballZ_command.py _
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.354885983 +0000 UTC m=+0.019541006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:47 np0005533938 ansible-async_wrapper.py[100322]: Starting module and watcher
Nov 24 13:21:47 np0005533938 ansible-async_wrapper.py[100322]: Start watching 100324 (30)
Nov 24 13:21:47 np0005533938 ansible-async_wrapper.py[100324]: Start module (100324)
Nov 24 13:21:47 np0005533938 ansible-async_wrapper.py[100296]: Return async_wrapper task started.
Nov 24 13:21:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fbc45122625b33e6d365e358de528f9ef35d0a56ae62bb68bffc8923ba71ff01-merged.mount: Deactivated successfully.
Nov 24 13:21:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:47 np0005533938 podman[100297]: 2025-11-24 18:21:47.483731472 +0000 UTC m=+0.148386465 container remove 85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:47 np0005533938 systemd[1]: libpod-conmon-85d9a90a9127a81680a12d2a0d4eb3b66e1aa3ba6063f64b23fc9ac0c308a769.scope: Deactivated successfully.
Nov 24 13:21:47 np0005533938 python3[100330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:47 np0005533938 podman[100344]: 2025-11-24 18:21:47.624342274 +0000 UTC m=+0.041569804 container create de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:21:47 np0005533938 podman[100355]: 2025-11-24 18:21:47.650188225 +0000 UTC m=+0.040962668 container create 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:21:47 np0005533938 systemd[1]: Started libpod-conmon-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope.
Nov 24 13:21:47 np0005533938 systemd[1]: Started libpod-conmon-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope.
Nov 24 13:21:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:47 np0005533938 podman[100344]: 2025-11-24 18:21:47.695509821 +0000 UTC m=+0.112737371 container init de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:47 np0005533938 podman[100355]: 2025-11-24 18:21:47.700409472 +0000 UTC m=+0.091183935 container init 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:47 np0005533938 podman[100344]: 2025-11-24 18:21:47.704841212 +0000 UTC m=+0.122068742 container start de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:21:47 np0005533938 podman[100344]: 2025-11-24 18:21:47.609452724 +0000 UTC m=+0.026680274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:47 np0005533938 podman[100355]: 2025-11-24 18:21:47.708639477 +0000 UTC m=+0.099413920 container start 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:21:47 np0005533938 podman[100344]: 2025-11-24 18:21:47.709517499 +0000 UTC m=+0.126745039 container attach de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:21:47 np0005533938 podman[100355]: 2025-11-24 18:21:47.712498753 +0000 UTC m=+0.103273236 container attach 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 13:21:47 np0005533938 podman[100355]: 2025-11-24 18:21:47.63226534 +0000 UTC m=+0.023039783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:48 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:21:48 np0005533938 happy_volhard[100379]: 
Nov 24 13:21:48 np0005533938 happy_volhard[100379]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 13:21:48 np0005533938 systemd[1]: libpod-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope: Deactivated successfully.
Nov 24 13:21:48 np0005533938 podman[100406]: 2025-11-24 18:21:48.327169495 +0000 UTC m=+0.022856788 container died 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3aa43e2464b7e98d43837078167d08359ab9deb2a56c6a39b6838f52093dcbb7-merged.mount: Deactivated successfully.
Nov 24 13:21:48 np0005533938 podman[100406]: 2025-11-24 18:21:48.366482432 +0000 UTC m=+0.062169695 container remove 7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167 (image=quay.io/ceph/ceph:v18, name=happy_volhard, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:21:48 np0005533938 systemd[1]: libpod-conmon-7c4ad23304836a36bdf83f720051f6be07e0036addd639f9ec1475b5ecfb4167.scope: Deactivated successfully.
Nov 24 13:21:48 np0005533938 ansible-async_wrapper.py[100324]: Module complete (100324)
Nov 24 13:21:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:48 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 24 13:21:48 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]: {
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_id": 0,
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "type": "bluestore"
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    },
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_id": 1,
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "type": "bluestore"
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    },
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_id": 2,
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:        "type": "bluestore"
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]:    }
Nov 24 13:21:48 np0005533938 frosty_mccarthy[100374]: }
Nov 24 13:21:48 np0005533938 systemd[1]: libpod-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope: Deactivated successfully.
Nov 24 13:21:48 np0005533938 podman[100344]: 2025-11-24 18:21:48.652167285 +0000 UTC m=+1.069394825 container died de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:21:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5464985d7741a148a99a1b21b70c0bae3d0306d069c4f78217f62073ded689d8-merged.mount: Deactivated successfully.
Nov 24 13:21:48 np0005533938 podman[100344]: 2025-11-24 18:21:48.717724163 +0000 UTC m=+1.134951703 container remove de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:48 np0005533938 systemd[1]: libpod-conmon-de926f0f5d83a5d743458f93c5bdd7eb8c8b61c2b1d55c7652d41f2c29b8d317.scope: Deactivated successfully.
Nov 24 13:21:48 np0005533938 python3[100497]: ansible-ansible.legacy.async_status Invoked with jid=j78697275907.100296 mode=status _async_dir=/root/.ansible_async
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:48 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:48 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 13:21:48 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 13:21:49 np0005533938 python3[100605]: ansible-ansible.legacy.async_status Invoked with jid=j78697275907.100296 mode=cleanup _async_dir=/root/.ansible_async
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.255163238 +0000 UTC m=+0.032818886 container create 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:49 np0005533938 systemd[1]: Started libpod-conmon-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope.
Nov 24 13:21:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.325562326 +0000 UTC m=+0.103217974 container init 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.333933884 +0000 UTC m=+0.111589532 container start 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.336755914 +0000 UTC m=+0.114411592 container attach 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.239982791 +0000 UTC m=+0.017638459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:49 np0005533938 serene_mclean[100714]: 167 167
Nov 24 13:21:49 np0005533938 systemd[1]: libpod-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope: Deactivated successfully.
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.337961144 +0000 UTC m=+0.115616792 container died 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:21:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6e90de0b54ef416e72bf5be05ca61a28c800c06f88efacedd5b6eb6edc77116c-merged.mount: Deactivated successfully.
Nov 24 13:21:49 np0005533938 podman[100698]: 2025-11-24 18:21:49.368518773 +0000 UTC m=+0.146174421 container remove 445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:21:49 np0005533938 systemd[1]: libpod-conmon-445ea5b96657481ee98bae2f5bb56876a11b91de3974161e166d4aea21d162ad.scope: Deactivated successfully.
Nov 24 13:21:49 np0005533938 systemd[1]: Reloading.
Nov 24 13:21:49 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:21:49 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:21:49 np0005533938 systemd[1]: Reloading.
Nov 24 13:21:49 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pecquu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:49 np0005533938 ceph-mon[74927]: Deploying daemon rgw.rgw.compute-0.pecquu on compute-0
Nov 24 13:21:49 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:21:49 np0005533938 python3[100797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Nov 24 13:21:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Nov 24 13:21:49 np0005533938 podman[100836]: 2025-11-24 18:21:49.905934916 +0000 UTC m=+0.073348152 container create 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:49 np0005533938 systemd[1]: Started libpod-conmon-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope.
Nov 24 13:21:49 np0005533938 podman[100836]: 2025-11-24 18:21:49.880195037 +0000 UTC m=+0.047608363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:49 np0005533938 systemd[1]: Starting Ceph rgw.rgw.compute-0.pecquu for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:21:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:50 np0005533938 podman[100836]: 2025-11-24 18:21:50.001867048 +0000 UTC m=+0.169280304 container init 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:21:50 np0005533938 podman[100836]: 2025-11-24 18:21:50.011507558 +0000 UTC m=+0.178920784 container start 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:50 np0005533938 podman[100836]: 2025-11-24 18:21:50.014697097 +0000 UTC m=+0.182110353 container attach 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:21:50 np0005533938 podman[100903]: 2025-11-24 18:21:50.195796164 +0000 UTC m=+0.042999959 container create 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:21:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32568201311f79804fc9dfcbfcdf0d65b67bcbe2fae68cc844380ade5762c297/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pecquu supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:50 np0005533938 podman[100903]: 2025-11-24 18:21:50.258638214 +0000 UTC m=+0.105842029 container init 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:50 np0005533938 podman[100903]: 2025-11-24 18:21:50.262862229 +0000 UTC m=+0.110066014 container start 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:50 np0005533938 bash[100903]: 1905e5a7aafe9faa6120b9302738e1e90e777a8e8f941c8c0f2564ce6b43ff73
Nov 24 13:21:50 np0005533938 podman[100903]: 2025-11-24 18:21:50.178637008 +0000 UTC m=+0.025840803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:50 np0005533938 systemd[1]: Started Ceph rgw.rgw.compute-0.pecquu for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:21:50 np0005533938 radosgw[100923]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:21:50 np0005533938 radosgw[100923]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 24 13:21:50 np0005533938 radosgw[100923]: framework: beast
Nov 24 13:21:50 np0005533938 radosgw[100923]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 24 13:21:50 np0005533938 radosgw[100923]: init_numa not setting numa affinity
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1))
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 78a7e1ea-35cf-45e0-ae4c-ab983836de74 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:50 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:21:50 np0005533938 suspicious_turing[100854]: 
Nov 24 13:21:50 np0005533938 suspicious_turing[100854]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 24 13:21:50 np0005533938 systemd[1]: libpod-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope: Deactivated successfully.
Nov 24 13:21:50 np0005533938 podman[100836]: 2025-11-24 18:21:50.581408619 +0000 UTC m=+0.748821845 container died 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:50 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f1c5f4dab92447616223da4eef716e8594c40d29920d4d6e5a79a69adc4d3220-merged.mount: Deactivated successfully.
Nov 24 13:21:50 np0005533938 podman[100836]: 2025-11-24 18:21:50.622321265 +0000 UTC m=+0.789734491 container remove 35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:50 np0005533938 systemd[1]: libpod-conmon-35d6dcb2c6351cf287f99811ac223bed71e7a31897f8a0cfb3c659fae3efda13.scope: Deactivated successfully.
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 24 13:21:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.apnhwb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 24 13:21:50 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 24 13:21:50 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.024801538 +0000 UTC m=+0.059949729 container create c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:51 np0005533938 systemd[1]: Started libpod-conmon-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope.
Nov 24 13:21:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:50.998077865 +0000 UTC m=+0.033226146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.102919818 +0000 UTC m=+0.138068009 container init c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.109284966 +0000 UTC m=+0.144433157 container start c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.113201814 +0000 UTC m=+0.148350005 container attach c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:21:51 np0005533938 xenodochial_almeida[101173]: 167 167
Nov 24 13:21:51 np0005533938 systemd[1]: libpod-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope: Deactivated successfully.
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.117686915 +0000 UTC m=+0.152835116 container died c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c8f6fb1bd22ae4968c924723e89c9ab8094c4b4d99b4e0c761b5b56f6f44f2d5-merged.mount: Deactivated successfully.
Nov 24 13:21:51 np0005533938 podman[101156]: 2025-11-24 18:21:51.15414874 +0000 UTC m=+0.189296931 container remove c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:21:51 np0005533938 systemd[1]: libpod-conmon-c41ede5d63c43376f8db1e2032195f1508dfec642cb9d7f6f41c0c15a417e93f.scope: Deactivated successfully.
Nov 24 13:21:51 np0005533938 systemd[1]: Reloading.
Nov 24 13:21:51 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:21:51 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 13:21:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:51 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 24 13:21:51 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 24 13:21:51 np0005533938 systemd[1]: Reloading.
Nov 24 13:21:51 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:21:51 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:21:51 np0005533938 python3[101254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:51 np0005533938 podman[101293]: 2025-11-24 18:21:51.637197035 +0000 UTC m=+0.034756654 container create 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:51 np0005533938 podman[101293]: 2025-11-24 18:21:51.623438393 +0000 UTC m=+0.020998032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:51 np0005533938 systemd[1]: Started libpod-conmon-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope.
Nov 24 13:21:51 np0005533938 systemd[1]: Starting Ceph mds.cephfs.compute-0.apnhwb for e5ee928f-099b-569b-93c9-ecf025cbb50d...
Nov 24 13:21:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: Saving service rgw.rgw spec with placement compute-0
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: Deploying daemon mds.cephfs.compute-0.apnhwb on compute-0
Nov 24 13:21:51 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 24 13:21:51 np0005533938 podman[101293]: 2025-11-24 18:21:51.785370364 +0000 UTC m=+0.182930003 container init 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:21:51 np0005533938 podman[101293]: 2025-11-24 18:21:51.798069399 +0000 UTC m=+0.195629018 container start 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:21:51 np0005533938 podman[101293]: 2025-11-24 18:21:51.802775146 +0000 UTC m=+0.200334765 container attach 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:21:51 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 24 13:21:51 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 24 13:21:51 np0005533938 podman[101360]: 2025-11-24 18:21:51.973959607 +0000 UTC m=+0.047442129 container create f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40331e92f3b1f6298dd0ccff9a97cd00b7ee4478cf53a681ed64faafaa1286ac/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.apnhwb supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:52 np0005533938 podman[101360]: 2025-11-24 18:21:51.944394383 +0000 UTC m=+0.017876925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:52 np0005533938 podman[101360]: 2025-11-24 18:21:52.043615126 +0000 UTC m=+0.117097678 container init f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:52 np0005533938 podman[101360]: 2025-11-24 18:21:52.048155329 +0000 UTC m=+0.121637861 container start f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:21:52 np0005533938 bash[101360]: f8af585414f5203083e73145075b9783ac13a27d57e1366f7b97b40576de60b1
Nov 24 13:21:52 np0005533938 systemd[1]: Started Ceph mds.cephfs.compute-0.apnhwb for e5ee928f-099b-569b-93c9-ecf025cbb50d.
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: main not setting numa affinity
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: pidfile_write: ignore empty --pid-file
Nov 24 13:21:52 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mds-cephfs-compute-0-apnhwb[101376]: starting mds.cephfs.compute-0.apnhwb at 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 2 from mon.0
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1))
Nov 24 13:21:52 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event a0179e9b-54c5-4ae3-8845-23c41a2c2c19 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 24 13:21:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:52 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:21:52 np0005533938 hardcore_curran[101310]: 
Nov 24 13:21:52 np0005533938 hardcore_curran[101310]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 24 13:21:52 np0005533938 systemd[1]: libpod-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope: Deactivated successfully.
Nov 24 13:21:52 np0005533938 conmon[101310]: conmon 07e5d1071f85e00cdb0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope/container/memory.events
Nov 24 13:21:52 np0005533938 podman[101293]: 2025-11-24 18:21:52.388941301 +0000 UTC m=+0.786500930 container died 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-90fba42c68b75cea4d14c3e8dd5d6656c7f557e53a24f33cfb345892dfe739be-merged.mount: Deactivated successfully.
Nov 24 13:21:52 np0005533938 podman[101293]: 2025-11-24 18:21:52.431110628 +0000 UTC m=+0.828670247 container remove 07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b (image=quay.io/ceph/ceph:v18, name=hardcore_curran, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:21:52 np0005533938 systemd[1]: libpod-conmon-07e5d1071f85e00cdb0cb865a3d3855c3a701ba2ffce2bcd214fa8654eaa832b.scope: Deactivated successfully.
Nov 24 13:21:52 np0005533938 ansible-async_wrapper.py[100322]: Done in kid B.
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] as mds.0
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.apnhwb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 new map
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T18:21:38.431267+0000#012modified#0112025-11-24T18:21:52.477258+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14267}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.apnhwb{0:14267} state up:creating seq 1 addr [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 3 from mon.0
Nov 24 13:21:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v108: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x1
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x100
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x600
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x601
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:boot
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:creating}
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x602
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x603
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.apnhwb"} v 0) v1
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.apnhwb"}]: dispatch
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e3 all = 0
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x604
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x605
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x606
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x607
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x608
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.cache creating system inode with ino:0x609
Nov 24 13:21:52 np0005533938 ceph-mds[101380]: mds.0.3 creating_done
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.apnhwb is now active in filesystem cephfs as rank 0
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: daemon mds.cephfs.compute-0.apnhwb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: Cluster is now healthy
Nov 24 13:21:52 np0005533938 ceph-mon[74927]: daemon mds.cephfs.compute-0.apnhwb is now active in filesystem cephfs as rank 0
Nov 24 13:21:52 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 24 13:21:52 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 24 13:21:52 np0005533938 podman[101664]: 2025-11-24 18:21:52.862315325 +0000 UTC m=+0.061226931 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:52 np0005533938 podman[101664]: 2025-11-24 18:21:52.955182601 +0000 UTC m=+0.154094097 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:53 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 24 13:21:53 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 13:21:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:53 np0005533938 python3[101812]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e4 new map
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-24T18:21:38.431267+0000#012modified#0112025-11-24T18:21:53.481295+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14267}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.apnhwb{0:14267} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 24 13:21:53 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb Updating MDS map to version 4 from mon.0
Nov 24 13:21:53 np0005533938 ceph-mds[101380]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 24 13:21:53 np0005533938 ceph-mds[101380]: mds.0.3 handle_mds_map state change up:creating --> up:active
Nov 24 13:21:53 np0005533938 ceph-mds[101380]: mds.0.3 recovery_done -- successful recovery!
Nov 24 13:21:53 np0005533938 ceph-mds[101380]: mds.0.3 active_start
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2272409054,v1:192.168.122.100:6815/2272409054] up:active
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.apnhwb=up:active}
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:21:53 np0005533938 podman[101847]: 2025-11-24 18:21:53.522853396 +0000 UTC m=+0.040488316 container create f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:53 np0005533938 systemd[1]: Started libpod-conmon-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope.
Nov 24 13:21:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:53 np0005533938 podman[101847]: 2025-11-24 18:21:53.590415143 +0000 UTC m=+0.108050083 container init f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:21:53 np0005533938 podman[101847]: 2025-11-24 18:21:53.596277689 +0000 UTC m=+0.113912609 container start f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:53 np0005533938 podman[101847]: 2025-11-24 18:21:53.599300874 +0000 UTC m=+0.116935824 container attach f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:53 np0005533938 podman[101847]: 2025-11-24 18:21:53.505276639 +0000 UTC m=+0.022911579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:53 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 24 13:21:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 24 13:21:54 np0005533938 relaxed_carson[101885]: 
Nov 24 13:21:54 np0005533938 relaxed_carson[101885]: [{"container_id": "cd3250af4db7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.38%", "created": "2025-11-24T18:20:03.041398Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-24T18:20:03.115124Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514299Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2025-11-24T18:20:02.696421Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@crash.compute-0", "version": "18.2.7"}, {"container_id": "f8af585414f5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "11.80%", "created": "2025-11-24T18:21:52.063154Z", "daemon_id": "cephfs.compute-0.apnhwb", "daemon_name": "mds.cephfs.compute-0.apnhwb", "daemon_type": "mds", "events": ["2025-11-24T18:21:52.111991Z daemon:mds.cephfs.compute-0.apnhwb [INFO] \"Deployed mds.cephfs.compute-0.apnhwb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514588Z", 
"memory_usage": 18171822, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-24T18:21:51.951844Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mds.cephfs.compute-0.apnhwb", "version": "18.2.7"}, {"container_id": "9eef9f776910", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.19%", "created": "2025-11-24T18:18:49.722543Z", "daemon_id": "compute-0.dfqptp", "daemon_name": "mgr.compute-0.dfqptp", "daemon_type": "mgr", "events": ["2025-11-24T18:20:08.021719Z daemon:mgr.compute-0.dfqptp [INFO] \"Reconfigured mgr.compute-0.dfqptp on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514239Z", "memory_usage": 552180121, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-24T18:18:49.616364Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mgr.compute-0.dfqptp", "version": "18.2.7"}, {"container_id": "6770cfc50a03", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.07%", "created": "2025-11-24T18:18:44.899424Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-24T18:20:07.331253Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-11-24T18:21:53.514163Z", "memory_request": 2147483648, "memory_usage": 40076574, "ports": [], "service_name": "mon", "started": "2025-11-24T18:18:47.332196Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@mon.compute-0", "version": "18.2.7"}, {"container_id": "9c8b4f7ebd62", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.50%", "created": "2025-11-24T18:20:32.372150Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-24T18:20:32.438644Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514358Z", "memory_request": 4294967296, "memory_usage": 66542632, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:32.244328Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.0", "version": "18.2.7"}, {"container_id": "edbd9c794ff6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.58%", "created": "2025-11-24T18:20:38.427303Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-11-24T18:20:39.747438Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514419Z", "memory_request": 4294967296, "memory_usage": 67119349, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:37.844555Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.1", "version": "18.2.7"}, {"container_id": "d4b4bd73407e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.69%", "created": "2025-11-24T18:20:46.644510Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-24T18:20:46.798149Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-24T18:21:53.514474Z", "memory_request": 4294967296, "memory_usage": 65840087, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-24T18:20:45.786566Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d@osd.2", "version": "18.2.7"}, {"container_id": "1905e5a7aafe", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "2.76%", "created": "2025-11-24T18:21:50.280110Z", "daemon_id": "rgw.compute-0.pecquu", "daemon_name": "rgw.rgw.compute-0.pecquu", "daemon_type": "rgw", "events": ["2025-11-24
Nov 24 13:21:54 np0005533938 systemd[1]: libpod-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope: Deactivated successfully.
Nov 24 13:21:54 np0005533938 podman[101847]: 2025-11-24 18:21:54.14683859 +0000 UTC m=+0.664473550 container died f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dfa460109a9715a68b41f88925b042334a341d03b970933786e6bd688b1a16f6-merged.mount: Deactivated successfully.
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev ab8de85c-1473-4826-be33-9c2210e6893e does not exist
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d6a4a54f-8841-42bb-a35c-2632edc0bc9c does not exist
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 94d59412-9245-4efe-a215-68e269c1efd3 does not exist
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:21:54 np0005533938 podman[101847]: 2025-11-24 18:21:54.20322765 +0000 UTC m=+0.720862600 container remove f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d (image=quay.io/ceph/ceph:v18, name=relaxed_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:21:54 np0005533938 systemd[1]: libpod-conmon-f30730c0037c3adb5ceedca69430eeb296ee8829772df05b8a69a655d516061d.scope: Deactivated successfully.
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 24 13:21:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:54 np0005533938 rsyslogd[1008]: message too long (8589) with configured size 8096, begin of message is: [{"container_id": "cd3250af4db7", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v111: 195 pgs: 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.721485899 +0000 UTC m=+0.058802541 container create 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:21:54 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 11 completed events
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:21:54 np0005533938 systemd[1]: Started libpod-conmon-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope.
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.683165207 +0000 UTC m=+0.020481869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:54 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:21:54 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.837086449 +0000 UTC m=+0.174403111 container init 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.844863422 +0000 UTC m=+0.182180064 container start 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:54 np0005533938 condescending_hofstadter[102191]: 167 167
Nov 24 13:21:54 np0005533938 systemd[1]: libpod-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope: Deactivated successfully.
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.853885656 +0000 UTC m=+0.191202318 container attach 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.854210304 +0000 UTC m=+0.191526946 container died 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5cb39837a79624d44159fb23d6647fd1301cfb28bdd11af6c554bbaeada41ad5-merged.mount: Deactivated successfully.
Nov 24 13:21:54 np0005533938 podman[102175]: 2025-11-24 18:21:54.94224407 +0000 UTC m=+0.279560712 container remove 059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:21:54 np0005533938 systemd[1]: libpod-conmon-059167fdcd3f248af4334b4bf7c02b4c94713b63bdba1139ff618e6a71ffb31c.scope: Deactivated successfully.
Nov 24 13:21:55 np0005533938 podman[102241]: 2025-11-24 18:21:55.120466966 +0000 UTC m=+0.073682751 container create 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:55 np0005533938 python3[102235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:55 np0005533938 podman[102241]: 2025-11-24 18:21:55.068442314 +0000 UTC m=+0.021658109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:55 np0005533938 systemd[1]: Started libpod-conmon-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope.
Nov 24 13:21:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 podman[102257]: 2025-11-24 18:21:55.264305308 +0000 UTC m=+0.093576025 container create 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:21:55 np0005533938 podman[102257]: 2025-11-24 18:21:55.19149841 +0000 UTC m=+0.020769117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 24 13:21:55 np0005533938 systemd[1]: Started libpod-conmon-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope.
Nov 24 13:21:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:55 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 24 13:21:55 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 24 13:21:55 np0005533938 podman[102241]: 2025-11-24 18:21:55.45850878 +0000 UTC m=+0.411724605 container init 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 24 13:21:55 np0005533938 podman[102241]: 2025-11-24 18:21:55.46538254 +0000 UTC m=+0.418598315 container start 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 24 13:21:55 np0005533938 podman[102241]: 2025-11-24 18:21:55.523438632 +0000 UTC m=+0.476654417 container attach 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 13:21:55 np0005533938 podman[102257]: 2025-11-24 18:21:55.716702481 +0000 UTC m=+0.545973198 container init 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:55 np0005533938 podman[102257]: 2025-11-24 18:21:55.725436398 +0000 UTC m=+0.554707115 container start 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:21:55 np0005533938 podman[102257]: 2025-11-24 18:21:55.736590735 +0000 UTC m=+0.565861462 container attach 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:21:55 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 24 13:21:55 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:21:55 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 24 13:21:56 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 24 13:21:56 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:56 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758805489' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 24 13:21:56 np0005533938 adoring_leakey[102275]: 
Nov 24 13:21:56 np0005533938 adoring_leakey[102275]: {"fsid":"e5ee928f-099b-569b-93c9-ecf025cbb50d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":188,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1764008452,"num_in_osds":3,"osd_in_since":1764008421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":195}],"num_pgs":195,"num_pools":9,"num_objects":27,"data_bytes":463028,"bytes_used":84414464,"bytes_avail":64327512064,"bytes_total":64411926528,"read_bytes_sec":1023,"write_bytes_sec":4606,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.apnhwb","status":"up:active","gid":14267}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-24T18:21:54.483217+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.apnhwb":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 24 13:21:56 np0005533938 systemd[1]: libpod-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope: Deactivated successfully.
Nov 24 13:21:56 np0005533938 podman[102257]: 2025-11-24 18:21:56.322716619 +0000 UTC m=+1.151987306 container died 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:21:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-73812ab7f05a343958358db0b9c5f54871acb45aca0bfbe3a21882006495b20f-merged.mount: Deactivated successfully.
Nov 24 13:21:56 np0005533938 podman[102257]: 2025-11-24 18:21:56.363666405 +0000 UTC m=+1.192937092 container remove 188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3 (image=quay.io/ceph/ceph:v18, name=adoring_leakey, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:56 np0005533938 systemd[1]: libpod-conmon-188252963ccb36811691f49fa52efa99e8e45df6de4228a21d202be8b0c886c3.scope: Deactivated successfully.
Nov 24 13:21:56 np0005533938 sweet_hofstadter[102263]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:21:56 np0005533938 sweet_hofstadter[102263]: --> relative data size: 1.0
Nov 24 13:21:56 np0005533938 sweet_hofstadter[102263]: --> All data devices are unavailable
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 24 13:21:56 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 24 13:21:56 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v114: 196 pgs: 1 unknown, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 24 13:21:56 np0005533938 systemd[1]: libpod-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope: Deactivated successfully.
Nov 24 13:21:56 np0005533938 podman[102337]: 2025-11-24 18:21:56.565746083 +0000 UTC m=+0.048690280 container died 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:21:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c66c4ba95083a64565bd15fbc98703382bf9b55ae59f1a01b9fe99a0a5612fb2-merged.mount: Deactivated successfully.
Nov 24 13:21:56 np0005533938 podman[102337]: 2025-11-24 18:21:56.624752738 +0000 UTC m=+0.107696925 container remove 660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:56 np0005533938 systemd[1]: libpod-conmon-660b87947cca8b3461e548ffe674be0aeee87bbf8056948cacfa63cd6b09243b.scope: Deactivated successfully.
Nov 24 13:21:56 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 24 13:21:56 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2803639548' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 24 13:21:57 np0005533938 python3[102509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.334226214 +0000 UTC m=+0.040576018 container create 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:21:57 np0005533938 systemd[1]: Started libpod-conmon-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope.
Nov 24 13:21:57 np0005533938 podman[102542]: 2025-11-24 18:21:57.370494465 +0000 UTC m=+0.042447965 container create fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 13:21:57 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:57 np0005533938 systemd[1]: Started libpod-conmon-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope.
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.411237936 +0000 UTC m=+0.117587780 container init 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.316389461 +0000 UTC m=+0.022739285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:57 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 24 13:21:57 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.421912301 +0000 UTC m=+0.128262105 container start 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:21:57 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:57 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:57 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.425960902 +0000 UTC m=+0.132310736 container attach 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:21:57 np0005533938 sad_mirzakhani[102559]: 167 167
Nov 24 13:21:57 np0005533938 systemd[1]: libpod-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope: Deactivated successfully.
Nov 24 13:21:57 np0005533938 podman[102542]: 2025-11-24 18:21:57.436512274 +0000 UTC m=+0.108465774 container init fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.441128349 +0000 UTC m=+0.147478153 container died 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 13:21:57 np0005533938 podman[102542]: 2025-11-24 18:21:57.4420086 +0000 UTC m=+0.113962100 container start fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:21:57 np0005533938 podman[102542]: 2025-11-24 18:21:57.354224391 +0000 UTC m=+0.026177911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:57 np0005533938 podman[102542]: 2025-11-24 18:21:57.452671435 +0000 UTC m=+0.124624965 container attach fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:21:57 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c47f54ea5fcd053f91b45d9534804e811648e20ece17d79909d8e599009b80c5-merged.mount: Deactivated successfully.
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 24 13:21:57 np0005533938 podman[102530]: 2025-11-24 18:21:57.474887047 +0000 UTC m=+0.181236851 container remove 299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 24 13:21:57 np0005533938 systemd[1]: libpod-conmon-299071e2f5701828bbe05aa6e073481195e99937a3a999aa0f80aa421f76cc4b.scope: Deactivated successfully.
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 24 13:21:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 13:21:57 np0005533938 podman[102589]: 2025-11-24 18:21:57.630200603 +0000 UTC m=+0.023013042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538751053' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 24 13:21:58 np0005533938 podman[102589]: 2025-11-24 18:21:58.384692828 +0000 UTC m=+0.777505277 container create 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 24 13:21:58 np0005533938 vigilant_edison[102564]: 
Nov 24 13:21:58 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:21:58 np0005533938 vigilant_edison[102564]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pecquu","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 24 13:21:58 np0005533938 systemd[1]: libpod-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope: Deactivated successfully.
Nov 24 13:21:58 np0005533938 podman[102542]: 2025-11-24 18:21:58.421853121 +0000 UTC m=+1.093806621 container died fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:21:58 np0005533938 systemd[1]: Started libpod-conmon-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope.
Nov 24 13:21:58 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f497881b2e35bef1de4bf4de0a56e9dd15fbbd4bc60bf74ffeb6e968bcc41991-merged.mount: Deactivated successfully.
Nov 24 13:21:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 24 13:21:58 np0005533938 podman[102542]: 2025-11-24 18:21:58.479757438 +0000 UTC m=+1.151710938 container remove fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9 (image=quay.io/ceph/ceph:v18, name=vigilant_edison, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:21:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v116: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 988 B/s rd, 1.2 KiB/s wr, 4 op/s
Nov 24 13:21:58 np0005533938 systemd[1]: libpod-conmon-fc703ed1bb1c5b511eb5a508be59b532653e01f3c79fc5f4ec79e19506d9d5b9.scope: Deactivated successfully.
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 24 13:21:58 np0005533938 podman[102589]: 2025-11-24 18:21:58.496304099 +0000 UTC m=+0.889116538 container init 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 24 13:21:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 13:21:58 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:21:58 np0005533938 podman[102589]: 2025-11-24 18:21:58.511990639 +0000 UTC m=+0.904803058 container start 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:21:58 np0005533938 podman[102589]: 2025-11-24 18:21:58.519648749 +0000 UTC m=+0.912461178 container attach 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]: {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    "0": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "devices": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "/dev/loop3"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            ],
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_name": "ceph_lv0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_size": "21470642176",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "name": "ceph_lv0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "tags": {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.crush_device_class": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.encrypted": "0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_id": "0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.vdo": "0"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            },
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "vg_name": "ceph_vg0"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        }
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    ],
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    "1": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "devices": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "/dev/loop4"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            ],
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_name": "ceph_lv1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_size": "21470642176",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "name": "ceph_lv1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "tags": {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.crush_device_class": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.encrypted": "0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_id": "1",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.vdo": "0"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            },
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "vg_name": "ceph_vg1"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        }
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    ],
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    "2": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "devices": [
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "/dev/loop5"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            ],
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_name": "ceph_lv2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_size": "21470642176",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "name": "ceph_lv2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "tags": {
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.cluster_name": "ceph",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.crush_device_class": "",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.encrypted": "0",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osd_id": "2",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:                "ceph.vdo": "0"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            },
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "type": "block",
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:            "vg_name": "ceph_vg2"
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:        }
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]:    ]
Nov 24 13:21:59 np0005533938 elegant_vaughan[102633]: }
Nov 24 13:21:59 np0005533938 systemd[1]: libpod-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope: Deactivated successfully.
Nov 24 13:21:59 np0005533938 podman[102589]: 2025-11-24 18:21:59.270798611 +0000 UTC m=+1.663611040 container died 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:59 np0005533938 systemd[1]: var-lib-containers-storage-overlay-98dce4562069d5590fcbbcfbd9351416220843b176c85e16102934f8844f6929-merged.mount: Deactivated successfully.
Nov 24 13:21:59 np0005533938 podman[102589]: 2025-11-24 18:21:59.339536467 +0000 UTC m=+1.732348886 container remove 2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:21:59 np0005533938 systemd[1]: libpod-conmon-2f3951d4b7eef42ae8bbf5155d1bdee7b93b1ddd4187151e9ec1982071b3ecc4.scope: Deactivated successfully.
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 24 13:21:59 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 24 13:21:59 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 24 13:21:59 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 24 13:21:59 np0005533938 python3[102690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:21:59 np0005533938 radosgw[100923]: LDAP not started since no server URIs were provided in the configuration.
Nov 24 13:21:59 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-rgw-rgw-compute-0-pecquu[100919]: 2025-11-24T18:21:59.655+0000 7fbb4adbf940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 24 13:21:59 np0005533938 radosgw[100923]: framework: beast
Nov 24 13:21:59 np0005533938 radosgw[100923]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 24 13:21:59 np0005533938 radosgw[100923]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 24 13:21:59 np0005533938 radosgw[100923]: starting handler: beast
Nov 24 13:21:59 np0005533938 podman[102737]: 2025-11-24 18:21:59.696508151 +0000 UTC m=+0.080539041 container create 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:21:59 np0005533938 radosgw[100923]: set uid:gid to 167:167 (ceph:ceph)
Nov 24 13:21:59 np0005533938 systemd[1]: Started libpod-conmon-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope.
Nov 24 13:21:59 np0005533938 radosgw[100923]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pecquu,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=397255ee-4d69-4fa3-899d-5bd80ba5189e,zone_name=default,zonegroup_id=d82e3822-fde1-4c19-b950-8e22988d5e44,zonegroup_name=default}
Nov 24 13:21:59 np0005533938 podman[102737]: 2025-11-24 18:21:59.666400034 +0000 UTC m=+0.050430954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:21:59 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:21:59 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:59 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:21:59 np0005533938 podman[102737]: 2025-11-24 18:21:59.783393119 +0000 UTC m=+0.167423989 container init 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:21:59 np0005533938 podman[102737]: 2025-11-24 18:21:59.793372487 +0000 UTC m=+0.177403347 container start 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:21:59 np0005533938 podman[102737]: 2025-11-24 18:21:59.798675788 +0000 UTC m=+0.182706638 container attach 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.08065388 +0000 UTC m=+0.058793121 container create 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:22:00 np0005533938 systemd[1]: Started libpod-conmon-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope.
Nov 24 13:22:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.057265609 +0000 UTC m=+0.035404880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.151569341 +0000 UTC m=+0.129708592 container init 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.159081037 +0000 UTC m=+0.137220278 container start 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:22:00 np0005533938 wonderful_northcutt[103406]: 167 167
Nov 24 13:22:00 np0005533938 systemd[1]: libpod-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope: Deactivated successfully.
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.164806259 +0000 UTC m=+0.142945530 container attach 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:22:00 np0005533938 conmon[103406]: conmon 21dacaeabc5631810955 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope/container/memory.events
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.166765448 +0000 UTC m=+0.144904689 container died 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-983b8595d8b201846236fd34cb3baf4ce0c676e94bab630eea14316dc2998921-merged.mount: Deactivated successfully.
Nov 24 13:22:00 np0005533938 podman[103389]: 2025-11-24 18:22:00.259807368 +0000 UTC m=+0.237946609 container remove 21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:00 np0005533938 systemd[1]: libpod-conmon-21dacaeabc5631810955a809d28a044549d0d667e16608595b2ebbeb49883404.scope: Deactivated successfully.
Nov 24 13:22:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 24 13:22:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3379749381' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 24 13:22:00 np0005533938 awesome_beaver[103340]: mimic
Nov 24 13:22:00 np0005533938 ceph-mon[74927]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 24 13:22:00 np0005533938 ceph-mon[74927]: from='client.? 192.168.122.100:0/2409275648' entity='client.rgw.rgw.compute-0.pecquu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 24 13:22:00 np0005533938 podman[102737]: 2025-11-24 18:22:00.419140514 +0000 UTC m=+0.803171364 container died 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:22:00 np0005533938 systemd[1]: libpod-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope: Deactivated successfully.
Nov 24 13:22:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7fc6e627d6ee8efa5c4c3ab9ae7b8b325aed826e4aa1daef8191402bb5015dd5-merged.mount: Deactivated successfully.
Nov 24 13:22:00 np0005533938 podman[102737]: 2025-11-24 18:22:00.472979901 +0000 UTC m=+0.857010751 container remove 1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354 (image=quay.io/ceph/ceph:v18, name=awesome_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:22:00 np0005533938 systemd[1]: libpod-conmon-1d3cce47d84e6281963fac1a33402ea9fd15932628e93b76e701ff4699cd5354.scope: Deactivated successfully.
Nov 24 13:22:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v119: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 510 B/s wr, 1 op/s
Nov 24 13:22:00 np0005533938 podman[103448]: 2025-11-24 18:22:00.499421677 +0000 UTC m=+0.090062737 container create 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:22:00 np0005533938 podman[103448]: 2025-11-24 18:22:00.440414852 +0000 UTC m=+0.031055902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:00 np0005533938 systemd[1]: Started libpod-conmon-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope.
Nov 24 13:22:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:00 np0005533938 podman[103448]: 2025-11-24 18:22:00.612245409 +0000 UTC m=+0.202886479 container init 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:22:00 np0005533938 podman[103448]: 2025-11-24 18:22:00.620609436 +0000 UTC m=+0.211250486 container start 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:22:00 np0005533938 podman[103448]: 2025-11-24 18:22:00.623288223 +0000 UTC m=+0.213929273 container attach 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 24 13:22:01 np0005533938 python3[103509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:22:01 np0005533938 podman[103524]: 2025-11-24 18:22:01.470408407 +0000 UTC m=+0.036747073 container create 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:01 np0005533938 systemd[1]: Started libpod-conmon-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope.
Nov 24 13:22:01 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]: {
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_id": 0,
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "type": "bluestore"
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    },
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_id": 1,
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "type": "bluestore"
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    },
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_id": 2,
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:        "type": "bluestore"
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]:    }
Nov 24 13:22:01 np0005533938 admiring_mirzakhani[103477]: }
Nov 24 13:22:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:01 np0005533938 podman[103524]: 2025-11-24 18:22:01.546870176 +0000 UTC m=+0.113209122 container init 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:01 np0005533938 podman[103524]: 2025-11-24 18:22:01.454689367 +0000 UTC m=+0.021028063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:22:01 np0005533938 podman[103524]: 2025-11-24 18:22:01.55225261 +0000 UTC m=+0.118591276 container start 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:22:01 np0005533938 podman[103524]: 2025-11-24 18:22:01.555592593 +0000 UTC m=+0.121931289 container attach 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:22:01 np0005533938 systemd[1]: libpod-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope: Deactivated successfully.
Nov 24 13:22:01 np0005533938 podman[103448]: 2025-11-24 18:22:01.570549634 +0000 UTC m=+1.161190684 container died 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:22:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ee1ace7c24aff56fc21dcdd268b5464663a9a2978f9dbcfb256b713438dc2aac-merged.mount: Deactivated successfully.
Nov 24 13:22:01 np0005533938 podman[103448]: 2025-11-24 18:22:01.619421507 +0000 UTC m=+1.210062557 container remove 6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:22:01 np0005533938 systemd[1]: libpod-conmon-6e467a9a41b424baa60a3fe58fd96eea300b4f362b2a92104d3e5b32aa9cdac0.scope: Deactivated successfully.
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:22:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:01 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 24220f0a-b208-4327-bf21-5e2fd9de5ee2 does not exist
Nov 24 13:22:01 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 705d2717-c736-4aff-994c-68f975b5dab1 does not exist
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248733845' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 24 13:22:02 np0005533938 zealous_shaw[103549]: 
Nov 24 13:22:02 np0005533938 zealous_shaw[103549]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 24 13:22:02 np0005533938 systemd[1]: libpod-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope: Deactivated successfully.
Nov 24 13:22:02 np0005533938 podman[103524]: 2025-11-24 18:22:02.166631505 +0000 UTC m=+0.732970171 container died 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:22:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-196c9bd2700d9801eabe7f3578a0e9e8d6d30430dcec4e7e261fd7509ac18aa4-merged.mount: Deactivated successfully.
Nov 24 13:22:02 np0005533938 podman[103524]: 2025-11-24 18:22:02.212852513 +0000 UTC m=+0.779191179 container remove 8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df (image=quay.io/ceph/ceph:v18, name=zealous_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:22:02 np0005533938 systemd[1]: libpod-conmon-8e02ae2236d81cf5d059eed78f6de0a56027e5bfcdda44e561cf8228351791df.scope: Deactivated successfully.
Nov 24 13:22:02 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Nov 24 13:22:02 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Nov 24 13:22:02 np0005533938 podman[103821]: 2025-11-24 18:22:02.385846918 +0000 UTC m=+0.045053649 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: Cluster is now healthy
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:02 np0005533938 podman[103821]: 2025-11-24 18:22:02.471185697 +0000 UTC m=+0.130392408 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 197 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:02 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 319dabf3-d8dc-4b35-b62d-57c000554cb7 does not exist
Nov 24 13:22:02 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 69b7fd25-1614-481b-a45c-ef5394feb223 does not exist
Nov 24 13:22:02 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev fa82b585-d31d-4323-b445-55e90ea665da does not exist
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:22:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:22:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 24 13:22:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.53410158 +0000 UTC m=+0.039455260 container create 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 24 13:22:03 np0005533938 systemd[1]: Started libpod-conmon-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope.
Nov 24 13:22:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.582757729 +0000 UTC m=+0.088111379 container init 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.59287274 +0000 UTC m=+0.098226380 container start 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:22:03 np0005533938 practical_darwin[104137]: 167 167
Nov 24 13:22:03 np0005533938 systemd[1]: libpod-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope: Deactivated successfully.
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.59612203 +0000 UTC m=+0.101475670 container attach 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.596674544 +0000 UTC m=+0.102028184 container died 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.516701648 +0000 UTC m=+0.022055318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:03 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7789a01009d44a20a807c79b85c4fa7713789aedc56b721818b64bebd2c1f055-merged.mount: Deactivated successfully.
Nov 24 13:22:03 np0005533938 podman[104120]: 2025-11-24 18:22:03.628788051 +0000 UTC m=+0.134141691 container remove 11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:22:03 np0005533938 systemd[1]: libpod-conmon-11bdf3e6e8af43f825819675f18003f83216311b54ec8b193356b754df08b431.scope: Deactivated successfully.
Nov 24 13:22:03 np0005533938 podman[104162]: 2025-11-24 18:22:03.781742069 +0000 UTC m=+0.037229055 container create 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:22:03 np0005533938 systemd[1]: Started libpod-conmon-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope.
Nov 24 13:22:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:03 np0005533938 podman[104162]: 2025-11-24 18:22:03.853667495 +0000 UTC m=+0.109154531 container init 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:22:03 np0005533938 podman[104162]: 2025-11-24 18:22:03.859114171 +0000 UTC m=+0.114601157 container start 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:22:03 np0005533938 podman[104162]: 2025-11-24 18:22:03.764528652 +0000 UTC m=+0.020015658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:03 np0005533938 podman[104162]: 2025-11-24 18:22:03.861635413 +0000 UTC m=+0.117122399 container attach 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:22:03 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 24 13:22:03 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 24 13:22:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:22:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:22:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 24 13:22:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 24 13:22:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 24 13:22:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v121: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.9 KiB/s wr, 157 op/s
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:04 np0005533938 nostalgic_mcclintock[104178]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:22:04 np0005533938 nostalgic_mcclintock[104178]: --> relative data size: 1.0
Nov 24 13:22:04 np0005533938 nostalgic_mcclintock[104178]: --> All data devices are unavailable
Nov 24 13:22:04 np0005533938 systemd[1]: libpod-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope: Deactivated successfully.
Nov 24 13:22:04 np0005533938 podman[104162]: 2025-11-24 18:22:04.838360175 +0000 UTC m=+1.093847171 container died 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:22:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-891197beb9b38e30f44e080c1a17f561982d626e64d097e5c97b17bc807cffc1-merged.mount: Deactivated successfully.
Nov 24 13:22:04 np0005533938 podman[104162]: 2025-11-24 18:22:04.887761512 +0000 UTC m=+1.143248498 container remove 491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:22:04 np0005533938 systemd[1]: libpod-conmon-491ec1db538cd1e8468b1c36977b87634a67e67b1f05affc23eb640a8726329e.scope: Deactivated successfully.
Nov 24 13:22:04 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Nov 24 13:22:04 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Nov 24 13:22:05 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 24 13:22:05 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.441350288 +0000 UTC m=+0.038464116 container create 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:05 np0005533938 systemd[1]: Started libpod-conmon-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope.
Nov 24 13:22:05 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.504771193 +0000 UTC m=+0.101885041 container init 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.513201472 +0000 UTC m=+0.110315300 container start 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:22:05 np0005533938 priceless_kilby[104376]: 167 167
Nov 24 13:22:05 np0005533938 systemd[1]: libpod-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope: Deactivated successfully.
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.516798551 +0000 UTC m=+0.113912409 container attach 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.517506849 +0000 UTC m=+0.114620687 container died 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.426576191 +0000 UTC m=+0.023690039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:05 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ec77639aebfd9ce19a2c34cfcc6154e2a7fc005fc11ac66e8ee6548d62960402-merged.mount: Deactivated successfully.
Nov 24 13:22:05 np0005533938 podman[104360]: 2025-11-24 18:22:05.551013981 +0000 UTC m=+0.148127809 container remove 2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:22:05 np0005533938 systemd[1]: libpod-conmon-2222d249e3cd850265bdeb4a110bd41451ceb9dcf3fedd8bba43f9137ad77760.scope: Deactivated successfully.
Nov 24 13:22:05 np0005533938 podman[104399]: 2025-11-24 18:22:05.696491123 +0000 UTC m=+0.040468646 container create 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:22:05 np0005533938 systemd[1]: Started libpod-conmon-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope.
Nov 24 13:22:05 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:05 np0005533938 podman[104399]: 2025-11-24 18:22:05.678711022 +0000 UTC m=+0.022688535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:05 np0005533938 podman[104399]: 2025-11-24 18:22:05.779064833 +0000 UTC m=+0.123042366 container init 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:22:05 np0005533938 podman[104399]: 2025-11-24 18:22:05.784851057 +0000 UTC m=+0.128828570 container start 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:22:05 np0005533938 podman[104399]: 2025-11-24 18:22:05.787687197 +0000 UTC m=+0.131664710 container attach 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:22:05 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 24 13:22:05 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 24 13:22:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 138 op/s
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]: {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    "0": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "devices": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "/dev/loop3"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            ],
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_name": "ceph_lv0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_size": "21470642176",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "name": "ceph_lv0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "tags": {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_name": "ceph",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.crush_device_class": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.encrypted": "0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_id": "0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.vdo": "0"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            },
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "vg_name": "ceph_vg0"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        }
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    ],
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    "1": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "devices": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "/dev/loop4"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            ],
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_name": "ceph_lv1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_size": "21470642176",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "name": "ceph_lv1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "tags": {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_name": "ceph",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.crush_device_class": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.encrypted": "0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_id": "1",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.vdo": "0"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            },
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "vg_name": "ceph_vg1"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        }
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    ],
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    "2": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "devices": [
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "/dev/loop5"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            ],
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_name": "ceph_lv2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_size": "21470642176",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "name": "ceph_lv2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "tags": {
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.cluster_name": "ceph",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.crush_device_class": "",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.encrypted": "0",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osd_id": "2",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:                "ceph.vdo": "0"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            },
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "type": "block",
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:            "vg_name": "ceph_vg2"
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:        }
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]:    ]
Nov 24 13:22:06 np0005533938 wizardly_johnson[104415]: }
Nov 24 13:22:06 np0005533938 systemd[1]: libpod-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope: Deactivated successfully.
Nov 24 13:22:06 np0005533938 podman[104399]: 2025-11-24 18:22:06.565012769 +0000 UTC m=+0.908990282 container died 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:22:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6b57294ab0705f143dbee9f54202995fc50e674a8770616e5caff80701a1e686-merged.mount: Deactivated successfully.
Nov 24 13:22:06 np0005533938 podman[104399]: 2025-11-24 18:22:06.627588153 +0000 UTC m=+0.971565656 container remove 5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 13:22:06 np0005533938 systemd[1]: libpod-conmon-5cfe904c5cf2a132ff605439934611cc6e13d0d9f5d0fb6b1f0e1e2a227707fe.scope: Deactivated successfully.
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.237052006 +0000 UTC m=+0.039497612 container create 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:22:07 np0005533938 systemd[1]: Started libpod-conmon-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope.
Nov 24 13:22:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.31253126 +0000 UTC m=+0.114976876 container init 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.219701275 +0000 UTC m=+0.022146901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.324510758 +0000 UTC m=+0.126956364 container start 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.327454171 +0000 UTC m=+0.129899797 container attach 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:22:07 np0005533938 nostalgic_ride[104595]: 167 167
Nov 24 13:22:07 np0005533938 systemd[1]: libpod-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope: Deactivated successfully.
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.329635675 +0000 UTC m=+0.132081351 container died 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:22:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a52b6c68bf98c5bed5d2998a600a4f750315942c2f3abae2c590a79a097b3003-merged.mount: Deactivated successfully.
Nov 24 13:22:07 np0005533938 podman[104578]: 2025-11-24 18:22:07.376488869 +0000 UTC m=+0.178934475 container remove 56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:22:07 np0005533938 systemd[1]: libpod-conmon-56cdd84248b29d20a7d8f757875ec87fb0fce1fdf5807b0fd73e26a844ab6144.scope: Deactivated successfully.
Nov 24 13:22:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:07 np0005533938 podman[104618]: 2025-11-24 18:22:07.537839364 +0000 UTC m=+0.049716434 container create 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:22:07 np0005533938 systemd[1]: Started libpod-conmon-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope.
Nov 24 13:22:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:07 np0005533938 podman[104618]: 2025-11-24 18:22:07.597791253 +0000 UTC m=+0.109668323 container init 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:07 np0005533938 podman[104618]: 2025-11-24 18:22:07.607113454 +0000 UTC m=+0.118990524 container start 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:07 np0005533938 podman[104618]: 2025-11-24 18:22:07.514886074 +0000 UTC m=+0.026763234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:22:07 np0005533938 podman[104618]: 2025-11-24 18:22:07.611040632 +0000 UTC m=+0.122917702 container attach 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:22:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 110 op/s
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]: {
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_id": 0,
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "type": "bluestore"
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    },
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_id": 1,
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "type": "bluestore"
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    },
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_id": 2,
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:        "type": "bluestore"
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]:    }
Nov 24 13:22:08 np0005533938 fervent_ardinghelli[104635]: }
Nov 24 13:22:08 np0005533938 systemd[1]: libpod-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Deactivated successfully.
Nov 24 13:22:08 np0005533938 podman[104618]: 2025-11-24 18:22:08.601369232 +0000 UTC m=+1.113246362 container died 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:22:08 np0005533938 systemd[1]: libpod-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Consumed 1.001s CPU time.
Nov 24 13:22:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e359084bbc932d8d2aa0b6d1257eb33832f937fb96c97d93483e976fbad258f3-merged.mount: Deactivated successfully.
Nov 24 13:22:08 np0005533938 podman[104618]: 2025-11-24 18:22:08.656376628 +0000 UTC m=+1.168253698 container remove 12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:22:08 np0005533938 systemd[1]: libpod-conmon-12145b4d01b1884d5a751a0c1f1cea526b6e8fe860f9f6d0adf406afcf0d4230.scope: Deactivated successfully.
Nov 24 13:22:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:22:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:22:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 43dd4794-471b-46d2-bd04-42eff94bb194 does not exist
Nov 24 13:22:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 83e476f0-3e38-4402-9bdc-02b40e2eb4a9 does not exist
Nov 24 13:22:08 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 24 13:22:08 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 24 13:22:09 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 24 13:22:09 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 24 13:22:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:10 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 24 13:22:10 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 24 13:22:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 99 op/s
Nov 24 13:22:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 24 13:22:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 24 13:22:11 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 24 13:22:11 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 24 13:22:11 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 24 13:22:11 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 24 13:22:12 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 24 13:22:12 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 24 13:22:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 13:22:12 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 24 13:22:12 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 24 13:22:13 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 24 13:22:13 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 24 13:22:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.7 KiB/s wr, 91 op/s
Nov 24 13:22:14 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Nov 24 13:22:14 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Nov 24 13:22:14 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 24 13:22:14 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 24 13:22:15 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 24 13:22:15 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 24 13:22:15 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 24 13:22:15 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 24 13:22:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:16 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 24 13:22:16 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 24 13:22:16 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 24 13:22:16 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 24 13:22:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:17 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 24 13:22:17 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 24 13:22:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 24 13:22:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 24 13:22:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:19 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 24 13:22:19 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 24 13:22:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:21 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 24 13:22:21 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 24 13:22:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 24 13:22:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 24 13:22:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 24 13:22:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 24 13:22:24 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 24 13:22:24 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 24 13:22:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 24 13:22:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 24 13:22:25 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 24 13:22:25 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 24 13:22:25 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Nov 24 13:22:25 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Nov 24 13:22:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:27 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 24 13:22:27 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 24 13:22:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 24 13:22:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 24 13:22:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 24 13:22:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 24 13:22:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 24 13:22:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 24 13:22:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:30 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Nov 24 13:22:30 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Nov 24 13:22:30 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 24 13:22:30 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 24 13:22:32 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 24 13:22:32 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 24 13:22:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:33 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 24 13:22:33 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 24 13:22:34 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 24 13:22:34 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:22:34
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', 'images']
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:22:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:22:35 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 24 13:22:35 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 24 13:22:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 24 13:22:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 24 13:22:36 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 24 13:22:36 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 24 13:22:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 24 13:22:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 24 13:22:37 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 24 13:22:37 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 24 13:22:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:38 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 24 13:22:38 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:39 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 24 13:22:39 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 24 13:22:39 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 24 13:22:39 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 24 13:22:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 24 13:22:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 24 13:22:39 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:22:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 24 13:22:40 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 24 13:22:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 24 13:22:41 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=13.580782890s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 135.933990479s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=13.580782890s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 135.933990479s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 24 13:22:42 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] update: starting ev 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 30905073-f488-4412-b1f6-76b8a4219cbb (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event c47872ff-514b-42a8-89af-e3703917dc10 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event d697af87-958c-4d1a-ba59-ba26ca059987 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] complete: finished ev 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 24 13:22:42 np0005533938 ceph-mgr[75218]: [progress INFO root] Completed event 6c278b28-0bab-4e25-b4c2-6f4165c1702c (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[9.0( v 55'385 (0'0,55'385] local-lis/les=49/50 n=177 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=15.371401787s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 55'384 active pruub 137.947372437s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:42 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.499588013s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 52'15 active pruub 124.561424255s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[9.0( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=15.371401787s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 0'0 unknown pruub 137.947372437s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.499588013s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 unknown pruub 124.561424255s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:42 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 59 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 24 13:22:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 24 13:22:43 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.14( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.17( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.16( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.11( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.3( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.2( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.d( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.9( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.a( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.b( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.8( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.6( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.7( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.4( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1a( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.5( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.18( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1e( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1d( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.12( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.10( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1b( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.14( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.0( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 55'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.2( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.a( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1a( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.4( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.12( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.10( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 60 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:44 np0005533938 python3[104755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.265971468 +0000 UTC m=+0.042157219 container create 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:22:44 np0005533938 systemd[1]: Started libpod-conmon-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope.
Nov 24 13:22:44 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.327856267 +0000 UTC m=+0.104042038 container init 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.333295631 +0000 UTC m=+0.109481372 container start 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.336308637 +0000 UTC m=+0.112494378 container attach 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.245674421 +0000 UTC m=+0.021860212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:22:44 np0005533938 mystifying_meitner[104771]: could not fetch user info: no user info saved
Nov 24 13:22:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v146: 290 pgs: 1 peering, 31 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:44 np0005533938 systemd[1]: libpod-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope: Deactivated successfully.
Nov 24 13:22:44 np0005533938 conmon[104771]: conmon 5a4bdd43126812557beb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope/container/memory.events
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.530841516 +0000 UTC m=+0.307027317 container died 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 24 13:22:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6bd176ebf21184e910650fa7eaa69474aef7f7eee92f2a833c9327e4418d2a00-merged.mount: Deactivated successfully.
Nov 24 13:22:44 np0005533938 podman[104756]: 2025-11-24 18:22:44.563584507 +0000 UTC m=+0.339770248 container remove 5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6 (image=quay.io/ceph/ceph:v18, name=mystifying_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:22:44 np0005533938 systemd[1]: libpod-conmon-5a4bdd43126812557beb0f0406bf08cd1c825fc83237268e118e83694d1c65f6.scope: Deactivated successfully.
Nov 24 13:22:44 np0005533938 ceph-mgr[75218]: [progress INFO root] Writing back 15 completed events
Nov 24 13:22:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 24 13:22:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:44 np0005533938 python3[104894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e5ee928f-099b-569b-93c9-ecf025cbb50d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:22:44 np0005533938 podman[104895]: 2025-11-24 18:22:44.933876881 +0000 UTC m=+0.050209248 container create 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:22:44 np0005533938 systemd[1]: Started libpod-conmon-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope.
Nov 24 13:22:44 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:22:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:22:44 np0005533938 podman[104895]: 2025-11-24 18:22:44.995159633 +0000 UTC m=+0.111492010 container init 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:22:45 np0005533938 podman[104895]: 2025-11-24 18:22:45.000814224 +0000 UTC m=+0.117146631 container start 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:22:45 np0005533938 podman[104895]: 2025-11-24 18:22:45.00419456 +0000 UTC m=+0.120526927 container attach 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:22:45 np0005533938 podman[104895]: 2025-11-24 18:22:44.91978032 +0000 UTC m=+0.036112697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 24 13:22:45 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 24 13:22:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=9.445177078s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 134.109207153s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:45 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=9.445177078s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown pruub 134.109207153s@ mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 24 13:22:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 24 13:22:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 24 13:22:46 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 24 13:22:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 24 13:22:46 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.0( empty local-lis/les=61/62 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]: {
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "user_id": "openstack",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "display_name": "openstack",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "email": "",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "suspended": 0,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "max_buckets": 1000,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "subusers": [],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "keys": [
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        {
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:            "user": "openstack",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:            "access_key": "AUTOF8MRD5G1EGMX38JK",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:            "secret_key": "jApM1ACuLGnfBFuI1u30xQJLvdOWiGTlf0zmyl5B"
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        }
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    ],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "swift_keys": [],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "caps": [],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "op_mask": "read, write, delete",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "default_placement": "",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "default_storage_class": "",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "placement_tags": [],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "bucket_quota": {
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "enabled": false,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "check_on_raw": false,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_size": -1,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_size_kb": 0,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_objects": -1
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    },
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "user_quota": {
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "enabled": false,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "check_on_raw": false,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_size": -1,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_size_kb": 0,
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:        "max_objects": -1
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    },
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "temp_url_keys": [],
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "type": "rgw",
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]:    "mfa_ids": []
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]: }
Nov 24 13:22:46 np0005533938 vigilant_franklin[104908]: 
Nov 24 13:22:46 np0005533938 systemd[1]: libpod-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope: Deactivated successfully.
Nov 24 13:22:46 np0005533938 podman[104895]: 2025-11-24 18:22:46.174172173 +0000 UTC m=+1.290504540 container died 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:22:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e79ca7b5696c2d7f54f08e5ee4fd486472ff93bee4cb53813cc1d4baf83a631e-merged.mount: Deactivated successfully.
Nov 24 13:22:46 np0005533938 podman[104895]: 2025-11-24 18:22:46.226974104 +0000 UTC m=+1.343306471 container remove 4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9 (image=quay.io/ceph/ceph:v18, name=vigilant_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:22:46 np0005533938 systemd[1]: libpod-conmon-4ab2831b6ba41edec09d634a9a15c4890419b3ff8219b00949ff2c43d0deead9.scope: Deactivated successfully.
Nov 24 13:22:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:22:48 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Nov 24 13:22:48 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Nov 24 13:22:49 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Nov 24 13:22:49 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Nov 24 13:22:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 24 13:22:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 24 13:22:50 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 24 13:22:50 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 24 13:22:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 314 B/s wr, 2 op/s
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:22:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:22:51 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935743332s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.651275635s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868105888s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.583679199s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.871232033s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586914062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867993355s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.583679199s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935569763s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.651275635s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.892376900s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.608245850s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.871136665s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586914062s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.892348289s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.608245850s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935056686s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.651275635s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.935006142s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.651275635s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.899293900s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615631104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.899258614s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615631104s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.14( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.942113876s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 141.658615112s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870314598s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586868286s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898689270s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615234375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898637772s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615234375s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870242119s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586868286s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.14( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.942015648s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.658615112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941965103s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658615112s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941914558s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658630371s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941915512s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658615112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941877365s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658630371s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870236397s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587051392s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.870181084s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587051392s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898333549s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615310669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898296356s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615310669s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869590759s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.586975098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941417694s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898124695s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615554810s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869544029s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.586975098s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941282272s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941436768s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659042358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.941395760s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659042358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869237900s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587051392s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.869210243s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587051392s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898085594s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615554810s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940861702s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940839767s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868601799s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587005615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940368652s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658782959s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898828506s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617248535s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868579865s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587005615s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940345764s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658782959s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.898781776s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617248535s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897096634s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.615768433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940112114s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 141.658813477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897046089s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.615768433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.940054893s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.658813477s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897900581s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616790771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868362427s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587203979s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939948082s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658874512s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897863388s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616790771s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868229866s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587203979s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939908981s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658874512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.868102074s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587188721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939609528s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658874512s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867918015s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587265015s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939493179s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658874512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867882729s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587265015s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939405441s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658905029s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897306442s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616775513s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897242546s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616775513s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939333916s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658905029s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867695808s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587280273s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867674828s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587280273s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939300537s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658966064s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897101402s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.616744995s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.939195633s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896965027s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.616744995s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867910385s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587188721s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867398262s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587493896s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896954536s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617126465s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896933556s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617126465s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938714981s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658935547s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/59 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.867290497s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587493896s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938659668s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658935547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866957664s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587326050s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938565254s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658950806s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866939545s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587326050s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938541412s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658950806s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896711349s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617385864s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896691322s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617385864s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938236237s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659027100s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.938210487s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659027100s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866488457s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587387085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866466522s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587387085s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866612434s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587463379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.896478653s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617431641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937709808s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658981323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937676430s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658981323s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866206169s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587463379s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866219521s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587600708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.866194725s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587600708s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937556267s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658981323s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937532425s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658981323s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895994186s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617431641s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895943642s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617477417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895924568s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617477417s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865922928s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587600708s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937307358s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659011841s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865901947s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587600708s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937273026s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659011841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865839958s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587661743s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865820885s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587661743s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897483826s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.619354248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.897460938s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.619354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865662575s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587631226s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937047958s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659011841s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865646362s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587631226s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937030792s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.659042358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936997414s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659011841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.937007904s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.659042358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895446777s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 139.617538452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.895427704s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.617538452s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936983109s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658996582s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865384102s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 146.587631226s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/59 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63 pruub=15.865368843s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.587631226s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936758995s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658996582s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936628342s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 141.658966064s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=10.936588287s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.658966064s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849390030s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094467163s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849451065s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094573975s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849112511s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094223022s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849340439s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094467163s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848711014s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.093948364s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848665237s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.093948364s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848638535s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094146729s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848741531s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094223022s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.849034309s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094573975s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848603249s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094146729s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848593712s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094329834s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848567963s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094329834s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848437309s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094375610s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848406792s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094375610s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848434448s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094421387s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848391533s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094421387s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848324776s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094436646s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848262787s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094436646s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848258972s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094467163s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848228455s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094467163s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848482132s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094818115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848127365s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094497681s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848447800s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848217010s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094619751s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848087311s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094497681s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848190308s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094619751s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848142624s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094635010s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848088264s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094635010s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847746849s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094268799s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848030090s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094711304s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847915649s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094696045s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847978592s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094711304s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848220825s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.095169067s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.848194122s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.095169067s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847879410s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094696045s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847599030s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094726562s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847535133s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094726562s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847527504s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 61'22 active pruub 132.094726562s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847389221s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 61'22 mlcod 0'0 unknown NOTIFY pruub 132.094726562s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847386360s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094787598s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847353935s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094787598s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847392082s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 132.094818115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847299576s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=8.847588539s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.094268799s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 63 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:51 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 63 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.12( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.11( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1a( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1c( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.18( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.1b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.9( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.d( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.3( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.8( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.2( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 64 pg[11.15( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.9( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.15( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.e( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[10.d( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.6( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.14( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 64 pg[11.10( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 64 pg[10.14( v 62'23 lc 61'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 316 B/s wr, 2 op/s
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 24 13:22:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 13:22:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 24 13:22:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 13:22:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 24 13:22:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 24 13:22:53 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 24 13:22:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 24 13:22:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 65 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:54 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.15 deep-scrub starts
Nov 24 13:22:54 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.15 deep-scrub ok
Nov 24 13:22:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 24 13:22:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 24 13:22:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580228806s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361923218s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580214500s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361984253s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580147743s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361923218s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580163956s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361984253s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.580101967s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361953735s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579842567s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361831665s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579828262s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361740112s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579917908s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361953735s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579610825s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361740112s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579423904s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361724854s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579375267s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361724854s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.579146385s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361892700s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578907013s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361801147s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578978539s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361892700s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578929901s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361831665s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 66 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=15.578817368s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361801147s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 13:22:55 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 24 13:22:55 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 24 13:22:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 24 13:22:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 24 13:22:55 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.576583862s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362167358s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.576522827s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362167358s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.575687408s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361968994s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.575637817s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361968994s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.573685646s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361999512s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.573624611s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361999512s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572578430s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362060547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572525978s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362060547s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572337151s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362136841s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572283745s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362136841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571805000s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.361770630s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572100639s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362136841s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571748734s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.361770630s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=64/65 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.572053909s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362136841s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571900368s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 149.362289429s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:22:55 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 67 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=14.571837425s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.362289429s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.11( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.d( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.1d( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.b( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 67 pg[9.5( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Nov 24 13:22:56 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Nov 24 13:22:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 24 13:22:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 24 13:22:56 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.1b( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.1( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.9( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.3( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 68 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:22:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 660 B/s, 8 objects/s recovering
Nov 24 13:22:56 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 24 13:22:56 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 24 13:22:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:22:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 5 active+remapped, 2 active+recovery_wait+remapped, 1 active+recovering+remapped, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20/215 objects misplaced (9.302%); 492 B/s, 6 objects/s recovering
Nov 24 13:22:58 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 24 13:22:58 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 24 13:23:00 np0005533938 systemd-logind[822]: New session 33 of user zuul.
Nov 24 13:23:00 np0005533938 systemd[1]: Started Session 33 of User zuul.
Nov 24 13:23:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 353 B/s, 13 objects/s recovering
Nov 24 13:23:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 24 13:23:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 13:23:00 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 24 13:23:00 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 24 13:23:01 np0005533938 python3.9[105162]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:23:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 24 13:23:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 24 13:23:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 13:23:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 24 13:23:01 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 24 13:23:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 24 13:23:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 8 objects/s recovering
Nov 24 13:23:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 24 13:23:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 13:23:02 np0005533938 python3.9[105380]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:23:02 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 24 13:23:02 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 24 13:23:03 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 24 13:23:03 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 24 13:23:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 24 13:23:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 24 13:23:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 13:23:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 24 13:23:03 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 24 13:23:04 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 7 objects/s recovering
Nov 24 13:23:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 24 13:23:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 13:23:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 24 13:23:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:04 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 24 13:23:04 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 24 13:23:05 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 24 13:23:05 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 24 13:23:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 24 13:23:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 24 13:23:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 13:23:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 24 13:23:05 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 24 13:23:05 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 24 13:23:05 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 24 13:23:06 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 24 13:23:06 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 24 13:23:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 24 13:23:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 24 13:23:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 13:23:06 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 24 13:23:06 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 24 13:23:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:07 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 24 13:23:07 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 24 13:23:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536156654s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.615783691s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536096573s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.615783691s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536886215s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617172241s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536849022s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617172241s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.537218094s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617813110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.537192345s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617813110s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536213875s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617156982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 72 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=15.536179543s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617156982s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 24 13:23:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 24 13:23:08 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:09 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 974ffd96-c3eb-4cd8-8569-86d4a7a02be5 does not exist
Nov 24 13:23:09 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 33a606b8-2bdc-4762-97dd-08a92f9a95c6 does not exist
Nov 24 13:23:09 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2202143b-209b-44f3-ad13-1b97ca2be463 does not exist
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 73 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:23:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.535032272s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.258880615s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534981728s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.258880615s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.528047562s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.251983643s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.527892113s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.251983643s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=10.544498444s) [2] r=-1 lpr=73 pi=[67,73)/1 crt=55'385 mlcod 0'0 active pruub 166.268676758s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73 pruub=10.544478416s) [2] r=-1 lpr=73 pi=[67,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 166.268676758s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534360886s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 active pruub 165.258819580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:09 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 73 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73 pruub=9.534062386s) [2] r=-1 lpr=73 pi=[66,73)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 165.258819580s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=73) [2] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=73) [2] r=0 lpr=73 pi=[66,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.020918845 +0000 UTC m=+0.042679924 container create 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:23:10 np0005533938 systemd[1]: session-33.scope: Deactivated successfully.
Nov 24 13:23:10 np0005533938 systemd[1]: session-33.scope: Consumed 8.306s CPU time.
Nov 24 13:23:10 np0005533938 systemd-logind[822]: Session 33 logged out. Waiting for processes to exit.
Nov 24 13:23:10 np0005533938 systemd-logind[822]: Removed session 33.
Nov 24 13:23:10 np0005533938 systemd[1]: Started libpod-conmon-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope.
Nov 24 13:23:10 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:09.999421884 +0000 UTC m=+0.021183013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.101922597 +0000 UTC m=+0.123683676 container init 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.10764365 +0000 UTC m=+0.129404739 container start 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.111306204 +0000 UTC m=+0.133067283 container attach 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:23:10 np0005533938 cool_hamilton[105724]: 167 167
Nov 24 13:23:10 np0005533938 systemd[1]: libpod-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope: Deactivated successfully.
Nov 24 13:23:10 np0005533938 conmon[105724]: conmon 0e180a0e3241d4588173 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope/container/memory.events
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.113400143 +0000 UTC m=+0.135161262 container died 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:23:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-55239eee08468c653f8eb9257bb421397ad86f3384ec3b6aa6afe50ad13a0912-merged.mount: Deactivated successfully.
Nov 24 13:23:10 np0005533938 podman[105708]: 2025-11-24 18:23:10.157164757 +0000 UTC m=+0.178925856 container remove 0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hamilton, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:23:10 np0005533938 systemd[1]: libpod-conmon-0e180a0e3241d45881739baaf2f5c62b460466563ea2dc18c751343d80c20eb7.scope: Deactivated successfully.
Nov 24 13:23:10 np0005533938 podman[105748]: 2025-11-24 18:23:10.348817675 +0000 UTC m=+0.050887298 container create 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:23:10 np0005533938 systemd[1]: Started libpod-conmon-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope.
Nov 24 13:23:10 np0005533938 podman[105748]: 2025-11-24 18:23:10.319058329 +0000 UTC m=+0.021127952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:10 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:23:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:10 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 24 13:23:10 np0005533938 podman[105748]: 2025-11-24 18:23:10.465438199 +0000 UTC m=+0.167507812 container init 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=66/67 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 74 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[67,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:10 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] r=-1 lpr=74 pi=[67,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:10 np0005533938 podman[105748]: 2025-11-24 18:23:10.476994788 +0000 UTC m=+0.179064381 container start 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:23:10 np0005533938 podman[105748]: 2025-11-24 18:23:10.483914944 +0000 UTC m=+0.185984557 container attach 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:23:10 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:10 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:10 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:10 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 74 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v173: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 24 13:23:10 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 24 13:23:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 24 13:23:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 24 13:23:11 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 24 13:23:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 13:23:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 24 13:23:11 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994512558s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084869385s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994291306s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084701538s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994224548s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084793091s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994330406s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084869385s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994112968s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084762573s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525858879s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617202759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525444031s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617691040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:11 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:11 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[67,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 75 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[66,74)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:11 np0005533938 gallant_sanderson[105764]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:23:11 np0005533938 gallant_sanderson[105764]: --> relative data size: 1.0
Nov 24 13:23:11 np0005533938 gallant_sanderson[105764]: --> All data devices are unavailable
Nov 24 13:23:11 np0005533938 systemd[1]: libpod-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Deactivated successfully.
Nov 24 13:23:11 np0005533938 systemd[1]: libpod-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Consumed 1.011s CPU time.
Nov 24 13:23:11 np0005533938 podman[105748]: 2025-11-24 18:23:11.59133795 +0000 UTC m=+1.293407563 container died 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:23:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dfcbefd78237e9028b78a198f93e1886aa87c6c1810b6ffc8f4f2e2d91d3b92a-merged.mount: Deactivated successfully.
Nov 24 13:23:11 np0005533938 podman[105748]: 2025-11-24 18:23:11.652228841 +0000 UTC m=+1.354298474 container remove 53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:23:11 np0005533938 systemd[1]: libpod-conmon-53cd5049445d7396daeb9157508ead3198e855177edf74e03112e9b6dd50cb75.scope: Deactivated successfully.
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 24 13:23:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.252563623 +0000 UTC m=+0.040967886 container create c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:12 np0005533938 systemd[1]: Started libpod-conmon-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope.
Nov 24 13:23:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.237210676 +0000 UTC m=+0.025614959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.339305298 +0000 UTC m=+0.127709571 container init c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.349307712 +0000 UTC m=+0.137711985 container start c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.353226654 +0000 UTC m=+0.141630957 container attach c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:23:12 np0005533938 exciting_yalow[105963]: 167 167
Nov 24 13:23:12 np0005533938 systemd[1]: libpod-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope: Deactivated successfully.
Nov 24 13:23:12 np0005533938 conmon[105963]: conmon c97f4479c956ade8b6a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope/container/memory.events
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.356812046 +0000 UTC m=+0.145216309 container died c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:23:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-29cc692344ee60d1a39561138a61074344613325ec7c026efb79d7045cea2e03-merged.mount: Deactivated successfully.
Nov 24 13:23:12 np0005533938 podman[105947]: 2025-11-24 18:23:12.402668619 +0000 UTC m=+0.191072882 container remove c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:23:12 np0005533938 systemd[1]: libpod-conmon-c97f4479c956ade8b6a023296f68ea081f4ec856dd21e4ce92939f25132f8cae.scope: Deactivated successfully.
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[59,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997705460s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554550171s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997131348s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554504395s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997094154s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554504395s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997041702s) [2] async=[2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554534912s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.997647285s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554550171s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76 pruub=14.996972084s) [2] r=-1 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554534912s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996159554s) [2] async=[2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 55'385 active pruub 173.554031372s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 76 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76 pruub=14.996058464s) [2] r=-1 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 173.554031372s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:12 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:12 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:12 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 24 13:23:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 13:23:12 np0005533938 podman[105988]: 2025-11-24 18:23:12.55013205 +0000 UTC m=+0.044664610 container create 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:23:12 np0005533938 systemd[1]: Started libpod-conmon-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope.
Nov 24 13:23:12 np0005533938 podman[105988]: 2025-11-24 18:23:12.53148748 +0000 UTC m=+0.026020040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:12 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:12 np0005533938 podman[105988]: 2025-11-24 18:23:12.648231458 +0000 UTC m=+0.142764028 container init 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:23:12 np0005533938 podman[105988]: 2025-11-24 18:23:12.656925386 +0000 UTC m=+0.151457926 container start 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:23:12 np0005533938 podman[105988]: 2025-11-24 18:23:12.659839468 +0000 UTC m=+0.154372028 container attach 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:13 np0005533938 elegant_wu[106005]: {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    "0": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "devices": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "/dev/loop3"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            ],
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_name": "ceph_lv0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_size": "21470642176",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "name": "ceph_lv0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "tags": {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_name": "ceph",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.crush_device_class": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.encrypted": "0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_id": "0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.vdo": "0"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            },
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "vg_name": "ceph_vg0"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        }
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    ],
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    "1": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "devices": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "/dev/loop4"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            ],
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_name": "ceph_lv1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_size": "21470642176",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "name": "ceph_lv1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "tags": {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_name": "ceph",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.crush_device_class": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.encrypted": "0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_id": "1",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.vdo": "0"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            },
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "vg_name": "ceph_vg1"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        }
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    ],
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    "2": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "devices": [
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "/dev/loop5"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            ],
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_name": "ceph_lv2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_size": "21470642176",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "name": "ceph_lv2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "tags": {
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.cluster_name": "ceph",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.crush_device_class": "",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.encrypted": "0",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osd_id": "2",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:                "ceph.vdo": "0"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            },
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "type": "block",
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:            "vg_name": "ceph_vg2"
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:        }
Nov 24 13:23:13 np0005533938 elegant_wu[106005]:    ]
Nov 24 13:23:13 np0005533938 elegant_wu[106005]: }
Nov 24 13:23:13 np0005533938 systemd[1]: libpod-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope: Deactivated successfully.
Nov 24 13:23:13 np0005533938 podman[105988]: 2025-11-24 18:23:13.389546498 +0000 UTC m=+0.884079058 container died 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:23:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2e7926b0f807469338b5a3f494d5cb565b5cdfb602f342441744fd20d416658b-merged.mount: Deactivated successfully.
Nov 24 13:23:13 np0005533938 podman[105988]: 2025-11-24 18:23:13.463914922 +0000 UTC m=+0.958447462 container remove 2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:23:13 np0005533938 systemd[1]: libpod-conmon-2a5e28bd13ec7c474a92501aa4c1c824335a44d043a661002acd7716ba8cb6b4.scope: Deactivated successfully.
Nov 24 13:23:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 24 13:23:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 13:23:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 24 13:23:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 24 13:23:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 24 13:23:13 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.17( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.7( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/67 les/c/f=75/68/0 sis=76) [2] r=0 lpr=76 pi=[67,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 77 pg[9.f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=74/66 les/c/f=75/67/0 sis=76) [2] r=0 lpr=76 pi=[66,76)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:13 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 24 13:23:13 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.003248821 +0000 UTC m=+0.033095651 container create 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:14 np0005533938 systemd[1]: Started libpod-conmon-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope.
Nov 24 13:23:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.067121187 +0000 UTC m=+0.096968037 container init 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.073029985 +0000 UTC m=+0.102876805 container start 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.07604271 +0000 UTC m=+0.105889560 container attach 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:14 np0005533938 intelligent_cannon[106182]: 167 167
Nov 24 13:23:14 np0005533938 systemd[1]: libpod-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope: Deactivated successfully.
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.077387878 +0000 UTC m=+0.107234708 container died 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:13.990135298 +0000 UTC m=+0.019982148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:14 np0005533938 systemd[1]: var-lib-containers-storage-overlay-162718e8c0ea29864fdb2c418d50c4229f1fef2670303e08c21c434658d1df3d-merged.mount: Deactivated successfully.
Nov 24 13:23:14 np0005533938 podman[106166]: 2025-11-24 18:23:14.11579582 +0000 UTC m=+0.145642650 container remove 07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cannon, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:23:14 np0005533938 systemd[1]: libpod-conmon-07a7aee6d2f95b2eccf997a142aca9b11b304c2adb7f7d7e99584651891629fb.scope: Deactivated successfully.
Nov 24 13:23:14 np0005533938 podman[106205]: 2025-11-24 18:23:14.26920722 +0000 UTC m=+0.054802088 container create a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:23:14 np0005533938 systemd[1]: Started libpod-conmon-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope.
Nov 24 13:23:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:23:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:23:14 np0005533938 podman[106205]: 2025-11-24 18:23:14.243692465 +0000 UTC m=+0.029287423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:23:14 np0005533938 podman[106205]: 2025-11-24 18:23:14.344792279 +0000 UTC m=+0.130387187 container init a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:23:14 np0005533938 podman[106205]: 2025-11-24 18:23:14.353664201 +0000 UTC m=+0.139259119 container start a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:23:14 np0005533938 podman[106205]: 2025-11-24 18:23:14.359085245 +0000 UTC m=+0.144680123 container attach a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:23:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 24 13:23:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 244 B/s, 12 objects/s recovering
Nov 24 13:23:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 24 13:23:14 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 24 13:23:14 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 24 13:23:14 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.984064102s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117111206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:14 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:14 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983452797s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117080688s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:14 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]: {
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_id": 0,
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "type": "bluestore"
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    },
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_id": 1,
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "type": "bluestore"
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    },
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_id": 2,
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:        "type": "bluestore"
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]:    }
Nov 24 13:23:15 np0005533938 zealous_sanderson[106223]: }
Nov 24 13:23:15 np0005533938 systemd[1]: libpod-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Deactivated successfully.
Nov 24 13:23:15 np0005533938 systemd[1]: libpod-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Consumed 1.019s CPU time.
Nov 24 13:23:15 np0005533938 podman[106205]: 2025-11-24 18:23:15.371013066 +0000 UTC m=+1.156607984 container died a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:23:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-360b6198dd933be1f37ae597274bae1b31bccce54ff6244f3a9cedcfb1705d5d-merged.mount: Deactivated successfully.
Nov 24 13:23:15 np0005533938 podman[106205]: 2025-11-24 18:23:15.434383618 +0000 UTC m=+1.219978496 container remove a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:23:15 np0005533938 systemd[1]: libpod-conmon-a05349e3d974411bece9097075365118b3de089803718567dcc1c80a5197b943.scope: Deactivated successfully.
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:15 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3d17801c-bfd8-4722-84a9-679708a590ba does not exist
Nov 24 13:23:15 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 4b43a132-a143-4918-b9f0-1bc6124d620f does not exist
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:23:15 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:15 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:15 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 24 13:23:15 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 24 13:23:15 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 13:23:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 13:23:16 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 24 13:23:16 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 24 13:23:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 2 active+remapped, 4 peering, 315 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 246 B/s, 12 objects/s recovering
Nov 24 13:23:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 24 13:23:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 24 13:23:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 24 13:23:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 24 13:23:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 164 B/s, 8 objects/s recovering
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 24 13:23:18 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 24 13:23:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 24 13:23:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 24 13:23:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 24 13:23:20 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 13:23:20 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 13:23:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 24 13:23:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 24 13:23:21 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 13:23:21 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 13:23:21 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 24 13:23:21 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 24 13:23:21 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 24 13:23:22 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 13:23:22 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 13:23:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 13:23:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 24 13:23:22 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 24 13:23:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 24 13:23:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 24 13:23:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 24 13:23:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 24 13:23:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 24 13:23:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 13:23:24 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 24 13:23:24 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 24 13:23:24 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120257378s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617523193s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119441986s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617889404s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:24 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:24 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:24 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:24 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 24 13:23:24 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 24 13:23:25 np0005533938 systemd-logind[822]: New session 34 of user zuul.
Nov 24 13:23:25 np0005533938 systemd[1]: Started Session 34 of User zuul.
Nov 24 13:23:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 24 13:23:25 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 24 13:23:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 24 13:23:25 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:25 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 13:23:25 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:25 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:25 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:25 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 13:23:26 np0005533938 python3.9[106473]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 13:23:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 1 active+clean+scrubbing+deep, 320 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 24 13:23:26 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 24 13:23:26 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 24 13:23:26 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:27 np0005533938 python3.9[106647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 24 13:23:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 24 13:23:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 24 13:23:27 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439636230s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619369507s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439293861s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619827271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:27 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:27 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:27 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:27 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:27 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:27 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 24 13:23:28 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 13:23:28 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 13:23:28 np0005533938 python3.9[106803]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:23:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 2 peering, 1 active+clean+scrubbing+deep, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 13:23:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 24 13:23:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 24 13:23:28 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 24 13:23:28 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:28 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:28 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 24 13:23:28 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 24 13:23:29 np0005533938 python3.9[106956]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:23:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 13:23:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 13:23:29 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 24 13:23:29 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 24 13:23:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 13:23:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 13:23:30 np0005533938 python3.9[107110]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:23:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 2 objects/s recovering
Nov 24 13:23:30 np0005533938 python3.9[107262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:23:31 np0005533938 python3.9[107412]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:23:31 np0005533938 network[107429]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:23:31 np0005533938 network[107430]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:23:31 np0005533938 network[107431]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:23:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v197: 321 pgs: 1 active+clean+scrubbing, 2 peering, 318 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 24 13:23:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 24 13:23:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 24 13:23:33 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 24 13:23:33 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 24 13:23:33 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 24 13:23:33 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 24 13:23:34 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 13:23:34 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:23:34
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Nov 24 13:23:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 24 13:23:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 13:23:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:23:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:23:34 np0005533938 python3.9[107691]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:23:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 24 13:23:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 13:23:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 24 13:23:35 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 24 13:23:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 24 13:23:35 np0005533938 python3.9[107841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:23:36 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 24 13:23:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 24 13:23:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 13:23:36 np0005533938 python3.9[107995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 24 13:23:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:37 np0005533938 python3.9[108153]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:23:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 24 13:23:38 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 24 13:23:38 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 24 13:23:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 24 13:23:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 13:23:38 np0005533938 python3.9[108237]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:23:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 24 13:23:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 13:23:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 24 13:23:39 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 24 13:23:39 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 24 13:23:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 24 13:23:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 24 13:23:40 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 24 13:23:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 24 13:23:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 13:23:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 24 13:23:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 13:23:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 24 13:23:41 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 24 13:23:41 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 24 13:23:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 24 13:23:41 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 24 13:23:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 24 13:23:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 24 13:23:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 13:23:42 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Nov 24 13:23:42 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:23:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:23:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 24 13:23:43 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 24 13:23:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 13:23:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 24 13:23:43 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 24 13:23:43 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081287384s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 active pruub 198.269866943s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:43 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 92 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=9.081098557s) [2] r=-1 lpr=92 pi=[67,92)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 198.269866943s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:43 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:43 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 13:23:43 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 13:23:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 24 13:23:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 24 13:23:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 24 13:23:44 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 24 13:23:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:44 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 93 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:44 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:44 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 13:23:44 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 13:23:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 24 13:23:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 24 13:23:45 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 24 13:23:45 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 24 13:23:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 24 13:23:45 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 24 13:23:45 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 24 13:23:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 24 13:23:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 24 13:23:46 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 24 13:23:46 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:46 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:46 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.975742340s) [2] async=[2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 55'385 active pruub 207.216949463s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:46 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95 pruub=14.974552155s) [2] r=-1 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 207.216949463s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:46 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 24 13:23:46 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 24 13:23:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 24 13:23:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 24 13:23:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 24 13:23:47 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:47 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 13:23:47 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 13:23:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:47 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 24 13:23:47 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 24 13:23:48 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 24 13:23:48 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 24 13:23:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 233 B/s wr, 7 op/s; 50 B/s, 2 objects/s recovering
Nov 24 13:23:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 24 13:23:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 13:23:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 24 13:23:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 13:23:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 24 13:23:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 24 13:23:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 24 13:23:49 np0005533938 python3.9[108491]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:23:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 24 13:23:49 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 24 13:23:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 24 13:23:50 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 24 13:23:50 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 24 13:23:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 190 B/s wr, 6 op/s; 40 B/s, 1 objects/s recovering
Nov 24 13:23:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 24 13:23:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 13:23:50 np0005533938 python3.9[108778]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 13:23:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 24 13:23:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 13:23:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 24 13:23:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 24 13:23:51 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 24 13:23:51 np0005533938 python3.9[108930]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 13:23:51 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 13:23:51 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 13:23:52 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 24 13:23:52 np0005533938 python3.9[109082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:23:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Nov 24 13:23:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 24 13:23:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 13:23:52 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 13:23:52 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 13:23:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 24 13:23:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 13:23:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 24 13:23:53 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 24 13:23:53 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.274039268s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 active pruub 200.589569092s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:53 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:53 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 24 13:23:53 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=0 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:53 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 98 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972748756s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 active pruub 214.270065308s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:53 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 99 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98 pruub=14.972686768s) [1] r=-1 lpr=98 pi=[67,98)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 214.270065308s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:53 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:53 np0005533938 python3.9[109234]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 13:23:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Nov 24 13:23:53 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Nov 24 13:23:53 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 24 13:23:53 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 24 13:23:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 24 13:23:54 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 24 13:23:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 24 13:23:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 24 13:23:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Nov 24 13:23:54 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:54 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Nov 24 13:23:54 np0005533938 python3.9[109386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 24 13:23:54 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 24 13:23:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:55 np0005533938 python3.9[109538]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:23:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 24 13:23:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 24 13:23:55 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 24 13:23:55 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 24 13:23:55 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:55 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 24 13:23:55 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] async=[1] r=0 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:55 np0005533938 python3.9[109616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:23:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 24 13:23:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 24 13:23:56 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 24 13:23:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.020028114s) [1] async=[1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 55'385 active pruub 217.350143433s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102 pruub=15.019754410s) [1] r=-1 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.350143433s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:56 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:56 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005826950s) [0] async=[0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 55'385 active pruub 203.362533569s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:56 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:23:56 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:23:56 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:23:56 np0005533938 python3.9[109768]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:23:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 24 13:23:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 24 13:23:57 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 24 13:23:57 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:57 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 13:23:57 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:23:57 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 13:23:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:23:57 np0005533938 python3.9[109922]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 13:23:58 np0005533938 python3.9[110075]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 13:23:58 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 13:23:58 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 13:23:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:23:59 np0005533938 python3.9[110228]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:23:59 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 24 13:23:59 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 24 13:23:59 np0005533938 python3.9[110380]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 13:23:59 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 24 13:23:59 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 24 13:24:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 54 B/s, 1 objects/s recovering
Nov 24 13:24:00 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 24 13:24:00 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 13:24:00 np0005533938 python3.9[110532]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:24:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 24 13:24:01 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 13:24:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 24 13:24:01 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 24 13:24:01 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 24 13:24:01 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 24 13:24:01 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 24 13:24:02 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 24 13:24:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 163 B/s wr, 5 op/s; 52 B/s, 1 objects/s recovering
Nov 24 13:24:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 24 13:24:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 13:24:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 24 13:24:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 13:24:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 24 13:24:03 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 24 13:24:03 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 24 13:24:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 24 13:24:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 24 13:24:04 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 24 13:24:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 24 13:24:04 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 141 B/s wr, 4 op/s; 45 B/s, 1 objects/s recovering
Nov 24 13:24:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 24 13:24:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 13:24:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 13:24:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 24 13:24:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 13:24:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 24 13:24:05 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 24 13:24:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 24 13:24:05 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 24 13:24:05 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 24 13:24:05 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653602600s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 active pruub 222.269836426s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:05 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 106 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106 pruub=10.653409958s) [2] r=-1 lpr=106 pi=[67,106)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 222.269836426s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:05 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 24 13:24:06 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:06 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:06 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:06 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 107 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=67/68 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 24 13:24:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 13:24:06 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Nov 24 13:24:06 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 24 13:24:07 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 24 13:24:07 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 24 13:24:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:07 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 24 13:24:08 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457493782s) [2] async=[2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 55'385 active pruub 229.911041260s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:08 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109 pruub=15.457426071s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 229.911041260s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 24 13:24:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:08 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 24 13:24:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 13:24:08 np0005533938 systemd[1]: session-34.scope: Deactivated successfully.
Nov 24 13:24:08 np0005533938 systemd[1]: session-34.scope: Consumed 19.867s CPU time.
Nov 24 13:24:08 np0005533938 systemd-logind[822]: Session 34 logged out. Waiting for processes to exit.
Nov 24 13:24:08 np0005533938 systemd-logind[822]: Removed session 34.
Nov 24 13:24:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 24 13:24:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 24 13:24:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 13:24:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 24 13:24:09 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 24 13:24:09 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:10 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 24 13:24:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Nov 24 13:24:10 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Nov 24 13:24:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:24:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 24 13:24:10 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 13:24:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 24 13:24:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 13:24:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 24 13:24:11 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 24 13:24:11 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 24 13:24:12 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 24 13:24:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:24:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 24 13:24:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 13:24:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 24 13:24:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 24 13:24:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 13:24:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 24 13:24:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 24 13:24:13 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 24 13:24:13 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 24 13:24:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.604092598s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 active pruub 216.645172119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:13 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:13 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=0 lpr=111 pi=[86,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 24 13:24:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:14 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:14 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:14 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[86,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v244: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 24 13:24:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 13:24:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 24 13:24:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 24 13:24:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 24 13:24:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 13:24:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 24 13:24:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 24 13:24:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 24 13:24:15 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023878098s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 active pruub 216.589828491s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:15 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:15 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:15 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 21c68669-acb9-4002-b25f-6f78cb1d900c does not exist
Nov 24 13:24:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 89fecb54-f04f-4492-aa56-fd00f3f17094 does not exist
Nov 24 13:24:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 44795734-8483-4a45-b997-6a4b1e01b443 does not exist
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998288155s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 55'385 active pruub 223.578338623s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:16 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:16 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:16 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:24:16 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:16 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:16 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:16 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 24 13:24:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.020166487 +0000 UTC m=+0.038107948 container create 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:24:17 np0005533938 systemd[1]: Started libpod-conmon-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope.
Nov 24 13:24:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.003873662 +0000 UTC m=+0.021815143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.098417831 +0000 UTC m=+0.116359312 container init 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.105887927 +0000 UTC m=+0.123829388 container start 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.108772309 +0000 UTC m=+0.126713800 container attach 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:24:17 np0005533938 laughing_franklin[110891]: 167 167
Nov 24 13:24:17 np0005533938 systemd[1]: libpod-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope: Deactivated successfully.
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.110599324 +0000 UTC m=+0.128540785 container died 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:24:17 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6712dbea73fa464bbdc9243bcc6f1568b5772b6e754f67b865eda569db959754-merged.mount: Deactivated successfully.
Nov 24 13:24:17 np0005533938 podman[110875]: 2025-11-24 18:24:17.153245694 +0000 UTC m=+0.171187155 container remove 97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:24:17 np0005533938 systemd[1]: libpod-conmon-97ea05359d5d31fc60c9f695dcff7de9d2a07521347c5633d8eb6a0c0223fcc5.scope: Deactivated successfully.
Nov 24 13:24:17 np0005533938 podman[110916]: 2025-11-24 18:24:17.305083887 +0000 UTC m=+0.036646562 container create 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:24:17 np0005533938 systemd[1]: Started libpod-conmon-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope.
Nov 24 13:24:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:17 np0005533938 podman[110916]: 2025-11-24 18:24:17.287875639 +0000 UTC m=+0.019438334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:17 np0005533938 podman[110916]: 2025-11-24 18:24:17.387946916 +0000 UTC m=+0.119509621 container init 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:24:17 np0005533938 podman[110916]: 2025-11-24 18:24:17.395148215 +0000 UTC m=+0.126710890 container start 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:24:17 np0005533938 podman[110916]: 2025-11-24 18:24:17.398252742 +0000 UTC m=+0.129815437 container attach 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:24:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 24 13:24:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:24:17 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 24 13:24:17 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=0 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:17 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010222435s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 active pruub 217.601470947s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:17 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:17 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:17 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:17 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 13:24:17 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 13:24:18 np0005533938 loving_lichterman[110933]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:24:18 np0005533938 loving_lichterman[110933]: --> relative data size: 1.0
Nov 24 13:24:18 np0005533938 loving_lichterman[110933]: --> All data devices are unavailable
Nov 24 13:24:18 np0005533938 systemd[1]: libpod-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope: Deactivated successfully.
Nov 24 13:24:18 np0005533938 podman[110916]: 2025-11-24 18:24:18.363890817 +0000 UTC m=+1.095453492 container died 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:24:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e1f183c0bb769f95b475054ed7f7454f88d292cefe79850a418235ab26520d90-merged.mount: Deactivated successfully.
Nov 24 13:24:18 np0005533938 podman[110916]: 2025-11-24 18:24:18.415298735 +0000 UTC m=+1.146861410 container remove 6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:24:18 np0005533938 systemd[1]: libpod-conmon-6f50c0863af132268c180520c83d59356f28ca9af5033723b56bedc9ee7db519.scope: Deactivated successfully.
Nov 24 13:24:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 24 13:24:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 24 13:24:18 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 24 13:24:18 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:18 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 24 13:24:18 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:18 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991624832s) [0] async=[0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 55'385 active pruub 225.596588135s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:18 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:18 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:18 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 24 13:24:18 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:18 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.01152592 +0000 UTC m=+0.083128656 container create ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:18.951660823 +0000 UTC m=+0.023263579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:19 np0005533938 systemd[1]: Started libpod-conmon-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope.
Nov 24 13:24:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.094633715 +0000 UTC m=+0.166236481 container init ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.101941237 +0000 UTC m=+0.173543973 container start ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.104989323 +0000 UTC m=+0.176592089 container attach ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:24:19 np0005533938 lucid_thompson[111130]: 167 167
Nov 24 13:24:19 np0005533938 systemd[1]: libpod-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope: Deactivated successfully.
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.107784922 +0000 UTC m=+0.179387708 container died ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:24:19 np0005533938 systemd[1]: var-lib-containers-storage-overlay-d5887d4f16e36538c64fc7d072ff42d42888efd586ec7404e3884c2eb1a70970-merged.mount: Deactivated successfully.
Nov 24 13:24:19 np0005533938 podman[111114]: 2025-11-24 18:24:19.146186436 +0000 UTC m=+0.217789172 container remove ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:24:19 np0005533938 systemd[1]: libpod-conmon-ca2eec5fb90d0e283b3d1a68cdb1dbcfb5a1cde838bcf4694b4b7102e7d89188.scope: Deactivated successfully.
Nov 24 13:24:19 np0005533938 podman[111154]: 2025-11-24 18:24:19.291589419 +0000 UTC m=+0.045784338 container create 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:24:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 24 13:24:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 24 13:24:19 np0005533938 systemd[1]: Started libpod-conmon-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope.
Nov 24 13:24:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:19 np0005533938 podman[111154]: 2025-11-24 18:24:19.272208038 +0000 UTC m=+0.026402967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:19 np0005533938 podman[111154]: 2025-11-24 18:24:19.383474153 +0000 UTC m=+0.137669092 container init 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:24:19 np0005533938 podman[111154]: 2025-11-24 18:24:19.391097922 +0000 UTC m=+0.145292831 container start 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 13:24:19 np0005533938 podman[111154]: 2025-11-24 18:24:19.393957743 +0000 UTC m=+0.148152662 container attach 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:24:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 24 13:24:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 24 13:24:19 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 24 13:24:19 np0005533938 ceph-osd[88544]: osd.0 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]: {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    "0": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "devices": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "/dev/loop3"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            ],
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_name": "ceph_lv0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_size": "21470642176",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "name": "ceph_lv0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "tags": {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_name": "ceph",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.crush_device_class": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.encrypted": "0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_id": "0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.vdo": "0"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            },
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "vg_name": "ceph_vg0"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        }
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    ],
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    "1": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "devices": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "/dev/loop4"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            ],
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_name": "ceph_lv1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_size": "21470642176",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "name": "ceph_lv1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "tags": {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_name": "ceph",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.crush_device_class": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.encrypted": "0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_id": "1",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.vdo": "0"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            },
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "vg_name": "ceph_vg1"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        }
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    ],
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    "2": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "devices": [
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "/dev/loop5"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            ],
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_name": "ceph_lv2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_size": "21470642176",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "name": "ceph_lv2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "tags": {
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.cluster_name": "ceph",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.crush_device_class": "",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.encrypted": "0",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osd_id": "2",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:                "ceph.vdo": "0"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            },
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "type": "block",
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:            "vg_name": "ceph_vg2"
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:        }
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]:    ]
Nov 24 13:24:20 np0005533938 elated_wozniak[111170]: }
Nov 24 13:24:20 np0005533938 systemd[1]: libpod-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope: Deactivated successfully.
Nov 24 13:24:20 np0005533938 podman[111154]: 2025-11-24 18:24:20.16692746 +0000 UTC m=+0.921122369 container died 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:24:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-60bb876d567beb8bc9a3acd25da2580caffb2404522d168ccca5fef2ecd9863a-merged.mount: Deactivated successfully.
Nov 24 13:24:20 np0005533938 podman[111154]: 2025-11-24 18:24:20.219729542 +0000 UTC m=+0.973924461 container remove 30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:24:20 np0005533938 systemd[1]: libpod-conmon-30eef09e1aaf461606eb4a91c2fcd939c2c820be634b15f0c643d9d1ee7c82e3.scope: Deactivated successfully.
Nov 24 13:24:20 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 24 13:24:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 24 13:24:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 24 13:24:20 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 24 13:24:20 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899600029s) [1] async=[1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 55'385 active pruub 228.537811279s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:20 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:24:20 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:24:20 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.840409185 +0000 UTC m=+0.056727321 container create 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:24:20 np0005533938 systemd[1]: Started libpod-conmon-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope.
Nov 24 13:24:20 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.819437774 +0000 UTC m=+0.035756000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.92192742 +0000 UTC m=+0.138245576 container init 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.928267058 +0000 UTC m=+0.144585194 container start 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.931458167 +0000 UTC m=+0.147776303 container attach 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:24:20 np0005533938 brave_chatelet[111344]: 167 167
Nov 24 13:24:20 np0005533938 systemd[1]: libpod-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope: Deactivated successfully.
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.933704643 +0000 UTC m=+0.150022779 container died 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:24:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c51dabc286fae2b8da0d9c850f82fe72312d48b97db85a898b492dd87e347bdb-merged.mount: Deactivated successfully.
Nov 24 13:24:20 np0005533938 podman[111328]: 2025-11-24 18:24:20.977298036 +0000 UTC m=+0.193616172 container remove 9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:24:20 np0005533938 systemd[1]: libpod-conmon-9778e63ef33f23b43df4d52d58196c45d78484f52ec30f3ef03d8ac180b07a5a.scope: Deactivated successfully.
Nov 24 13:24:21 np0005533938 podman[111369]: 2025-11-24 18:24:21.120013503 +0000 UTC m=+0.036474388 container create ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:24:21 np0005533938 systemd[1]: Started libpod-conmon-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope.
Nov 24 13:24:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:24:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:24:21 np0005533938 podman[111369]: 2025-11-24 18:24:21.178749372 +0000 UTC m=+0.095210287 container init ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:24:21 np0005533938 podman[111369]: 2025-11-24 18:24:21.186248688 +0000 UTC m=+0.102709583 container start ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 13:24:21 np0005533938 podman[111369]: 2025-11-24 18:24:21.190160806 +0000 UTC m=+0.106621721 container attach ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:24:21 np0005533938 podman[111369]: 2025-11-24 18:24:21.102232891 +0000 UTC m=+0.018693806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:24:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 24 13:24:21 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 24 13:24:21 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 24 13:24:21 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:24:22 np0005533938 keen_khorana[111385]: {
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_id": 0,
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "type": "bluestore"
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    },
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_id": 1,
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "type": "bluestore"
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    },
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_id": 2,
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:        "type": "bluestore"
Nov 24 13:24:22 np0005533938 keen_khorana[111385]:    }
Nov 24 13:24:22 np0005533938 keen_khorana[111385]: }
Nov 24 13:24:22 np0005533938 systemd[1]: libpod-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope: Deactivated successfully.
Nov 24 13:24:22 np0005533938 podman[111418]: 2025-11-24 18:24:22.163728188 +0000 UTC m=+0.019307041 container died ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:24:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-34b1457558d6c5232fa526cac33a432744b172ec46785763e261b40840e79d9c-merged.mount: Deactivated successfully.
Nov 24 13:24:22 np0005533938 podman[111418]: 2025-11-24 18:24:22.219725979 +0000 UTC m=+0.075304842 container remove ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:24:22 np0005533938 systemd[1]: libpod-conmon-ecadfb7af729e145628ab0ff36d8ff8283946de8ae0a38c553311c334b4e57f5.scope: Deactivated successfully.
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:22 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 319a3bcf-1804-4c9c-abdd-eced4dd8ff49 does not exist
Nov 24 13:24:22 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev bfc609cd-fe3b-4acf-a683-3035893c7017 does not exist
Nov 24 13:24:22 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 24 13:24:22 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 13:24:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 13:24:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:22 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:24:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 24 13:24:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 24 13:24:24 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 24 13:24:24 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 24 13:24:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 24 13:24:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 24 13:24:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 24 13:24:25 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 24 13:24:25 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 24 13:24:26 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 24 13:24:26 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 24 13:24:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 24 13:24:27 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 24 13:24:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:27 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 24 13:24:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 24 13:24:29 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 24 13:24:29 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 24 13:24:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Nov 24 13:24:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 13:24:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 13:24:31 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 24 13:24:31 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 24 13:24:31 np0005533938 systemd-logind[822]: New session 35 of user zuul.
Nov 24 13:24:31 np0005533938 systemd[1]: Started Session 35 of User zuul.
Nov 24 13:24:32 np0005533938 python3.9[111636]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 13:24:32 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 24 13:24:32 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 24 13:24:32 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 13:24:32 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 13:24:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:24:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 24 13:24:32 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 13:24:32 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 13:24:33 np0005533938 python3.9[111810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:24:34 np0005533938 python3.9[111966]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:24:34
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:24:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:25:11 np0005533938 rsyslogd[1008]: imjournal: 313 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 13:25:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 24 13:25:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 24 13:25:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:13 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 13:25:13 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 13:25:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 24 13:25:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 24 13:25:14 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Nov 24 13:25:14 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Nov 24 13:25:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:14 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 24 13:25:14 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 24 13:25:15 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 24 13:25:15 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 24 13:25:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 13:25:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 13:25:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 24 13:25:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 24 13:25:17 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 13:25:17 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 13:25:17 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 13:25:17 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 13:25:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:18 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 13:25:18 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 13:25:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 24 13:25:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 24 13:25:19 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 13:25:19 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 13:25:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:20 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 24 13:25:20 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 24 13:25:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:22 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 24 13:25:22 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 24 13:25:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 24 13:25:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 24 13:25:22 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 13:25:22 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7166ba42-589a-4ec5-b9fc-eea097202db6 does not exist
Nov 24 13:25:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 9702d538-64cd-452e-98d7-ab0c74be62b2 does not exist
Nov 24 13:25:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 765e62f6-22fd-4519-be66-d715617981d5 does not exist
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:25:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.608701126 +0000 UTC m=+0.041664157 container create a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 13:25:23 np0005533938 systemd[1]: Started libpod-conmon-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope.
Nov 24 13:25:23 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.592805431 +0000 UTC m=+0.025768492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.686349776 +0000 UTC m=+0.119312867 container init a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.693988406 +0000 UTC m=+0.126951447 container start a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.697620577 +0000 UTC m=+0.130583628 container attach a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:25:23 np0005533938 cranky_payne[116019]: 167 167
Nov 24 13:25:23 np0005533938 systemd[1]: libpod-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope: Deactivated successfully.
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.700835596 +0000 UTC m=+0.133798647 container died a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:25:23 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bca856c7ac6aee16e1cf39792e957e24f334448d527ae7b5d618801fd2aaa209-merged.mount: Deactivated successfully.
Nov 24 13:25:23 np0005533938 podman[116003]: 2025-11-24 18:25:23.737091898 +0000 UTC m=+0.170054939 container remove a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:25:23 np0005533938 systemd[1]: libpod-conmon-a363bae351f83d052411f9da6163f3de97c33e57852310dddb6667ff398fe6fc.scope: Deactivated successfully.
Nov 24 13:25:23 np0005533938 podman[116043]: 2025-11-24 18:25:23.894308976 +0000 UTC m=+0.039329139 container create 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:25:23 np0005533938 systemd[1]: Started libpod-conmon-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope.
Nov 24 13:25:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 24 13:25:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 24 13:25:23 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:23 np0005533938 podman[116043]: 2025-11-24 18:25:23.967925046 +0000 UTC m=+0.112945239 container init 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:25:23 np0005533938 podman[116043]: 2025-11-24 18:25:23.875073408 +0000 UTC m=+0.020093611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:23 np0005533938 podman[116043]: 2025-11-24 18:25:23.977845863 +0000 UTC m=+0.122866036 container start 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:23 np0005533938 podman[116043]: 2025-11-24 18:25:23.980879298 +0000 UTC m=+0.125899501 container attach 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:25:24 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:25:24 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:24 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:25:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 24 13:25:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 24 13:25:24 np0005533938 strange_cerf[116060]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:25:24 np0005533938 strange_cerf[116060]: --> relative data size: 1.0
Nov 24 13:25:24 np0005533938 strange_cerf[116060]: --> All data devices are unavailable
Nov 24 13:25:25 np0005533938 systemd[1]: libpod-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope: Deactivated successfully.
Nov 24 13:25:25 np0005533938 conmon[116060]: conmon 38b44d80568d396ff6c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope/container/memory.events
Nov 24 13:25:25 np0005533938 podman[116043]: 2025-11-24 18:25:25.030195393 +0000 UTC m=+1.175215576 container died 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-aa9a32f217aca668618b4be74770495ec11853c533bb9741c646092d908f8209-merged.mount: Deactivated successfully.
Nov 24 13:25:25 np0005533938 podman[116043]: 2025-11-24 18:25:25.090327488 +0000 UTC m=+1.235347661 container remove 38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:25:25 np0005533938 systemd[1]: libpod-conmon-38b44d80568d396ff6c619a4804f830d7b2400db99ac9a25249446e94865e39e.scope: Deactivated successfully.
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.706472706 +0000 UTC m=+0.041790980 container create 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:25:25 np0005533938 systemd[1]: Started libpod-conmon-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope.
Nov 24 13:25:25 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.689844522 +0000 UTC m=+0.025162786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.789396397 +0000 UTC m=+0.124714681 container init 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.796850643 +0000 UTC m=+0.132168877 container start 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.800047212 +0000 UTC m=+0.135365506 container attach 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:25:25 np0005533938 vibrant_poincare[116264]: 167 167
Nov 24 13:25:25 np0005533938 systemd[1]: libpod-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope: Deactivated successfully.
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.802577015 +0000 UTC m=+0.137895259 container died 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:25:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e65217805defbe0f795a9e85379894c43f536d6a5a714679e95e67f51a0ffe0d-merged.mount: Deactivated successfully.
Nov 24 13:25:25 np0005533938 podman[116248]: 2025-11-24 18:25:25.855108971 +0000 UTC m=+0.190427255 container remove 1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_poincare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 24 13:25:25 np0005533938 systemd[1]: libpod-conmon-1d566b7f3d4261596b82fbc6851f1b01fdd47bc5281f1eb1523da4f8af1f2cfa.scope: Deactivated successfully.
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.046100119 +0000 UTC m=+0.053326157 container create 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:25:26 np0005533938 systemd[1]: Started libpod-conmon-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope.
Nov 24 13:25:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.017863907 +0000 UTC m=+0.025089955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.127222736 +0000 UTC m=+0.134448804 container init 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.144716951 +0000 UTC m=+0.151942999 container start 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.148995117 +0000 UTC m=+0.156341008 container attach 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:25:26 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 13:25:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:26 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 13:25:26 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 24 13:25:26 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]: {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    "0": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "devices": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "/dev/loop3"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            ],
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_name": "ceph_lv0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_size": "21470642176",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "name": "ceph_lv0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "tags": {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_name": "ceph",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.crush_device_class": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.encrypted": "0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_id": "0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.vdo": "0"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            },
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "vg_name": "ceph_vg0"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        }
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    ],
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    "1": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "devices": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "/dev/loop4"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            ],
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_name": "ceph_lv1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_size": "21470642176",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "name": "ceph_lv1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "tags": {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_name": "ceph",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.crush_device_class": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.encrypted": "0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_id": "1",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.vdo": "0"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            },
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "vg_name": "ceph_vg1"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        }
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    ],
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    "2": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "devices": [
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "/dev/loop5"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            ],
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_name": "ceph_lv2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_size": "21470642176",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "name": "ceph_lv2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "tags": {
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.cluster_name": "ceph",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.crush_device_class": "",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.encrypted": "0",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osd_id": "2",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:                "ceph.vdo": "0"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            },
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "type": "block",
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:            "vg_name": "ceph_vg2"
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:        }
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]:    ]
Nov 24 13:25:26 np0005533938 flamboyant_nash[116306]: }
Nov 24 13:25:26 np0005533938 systemd[1]: libpod-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope: Deactivated successfully.
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.920455165 +0000 UTC m=+0.927681173 container died 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:25:26 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f560a4cc31bab8faf509c96f699cc2b83fa70a9343fb3201b1e8e9379ef89f53-merged.mount: Deactivated successfully.
Nov 24 13:25:26 np0005533938 podman[116288]: 2025-11-24 18:25:26.971281349 +0000 UTC m=+0.978507357 container remove 0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nash, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:25:26 np0005533938 systemd[1]: libpod-conmon-0786bb67dea9ba99e8550c320e43915d4ebacfa3d8b7b86be2a431ecd2b66c29.scope: Deactivated successfully.
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.504078234 +0000 UTC m=+0.034478868 container create c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:25:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:27 np0005533938 systemd[1]: Started libpod-conmon-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope.
Nov 24 13:25:27 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.571434189 +0000 UTC m=+0.101834863 container init c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.576481914 +0000 UTC m=+0.106882558 container start c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.579056158 +0000 UTC m=+0.109456822 container attach c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 13:25:27 np0005533938 naughty_yalow[116495]: 167 167
Nov 24 13:25:27 np0005533938 systemd[1]: libpod-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope: Deactivated successfully.
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.583230112 +0000 UTC m=+0.113630756 container died c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.489935243 +0000 UTC m=+0.020335907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:27 np0005533938 systemd[1]: var-lib-containers-storage-overlay-97c31b61c69bddc5186830d0cb49db872638d16a7fcb06df947625a639c48777-merged.mount: Deactivated successfully.
Nov 24 13:25:27 np0005533938 podman[116479]: 2025-11-24 18:25:27.613891564 +0000 UTC m=+0.144292208 container remove c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:25:27 np0005533938 systemd[1]: libpod-conmon-c44fc5e1b021f22abf8c6f0bd3f7cd6e6c0edbcbf2698fd6fd075ed6b3f913cf.scope: Deactivated successfully.
Nov 24 13:25:27 np0005533938 podman[116519]: 2025-11-24 18:25:27.758884769 +0000 UTC m=+0.050074526 container create 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:27 np0005533938 systemd[1]: Started libpod-conmon-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope.
Nov 24 13:25:27 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:25:27 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:27 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:27 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:27 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:25:27 np0005533938 podman[116519]: 2025-11-24 18:25:27.733771624 +0000 UTC m=+0.024961461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:25:27 np0005533938 podman[116519]: 2025-11-24 18:25:27.834705884 +0000 UTC m=+0.125895711 container init 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:25:27 np0005533938 podman[116519]: 2025-11-24 18:25:27.846721892 +0000 UTC m=+0.137911679 container start 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:25:27 np0005533938 podman[116519]: 2025-11-24 18:25:27.850597949 +0000 UTC m=+0.141787786 container attach 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:25:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:28 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 24 13:25:28 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]: {
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_id": 0,
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "type": "bluestore"
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    },
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_id": 1,
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "type": "bluestore"
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    },
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_id": 2,
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:        "type": "bluestore"
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]:    }
Nov 24 13:25:28 np0005533938 ecstatic_galois[116535]: }
Nov 24 13:25:28 np0005533938 systemd[1]: libpod-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope: Deactivated successfully.
Nov 24 13:25:28 np0005533938 podman[116519]: 2025-11-24 18:25:28.762837396 +0000 UTC m=+1.054027183 container died 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:25:28 np0005533938 systemd[1]: var-lib-containers-storage-overlay-be9ca8e276597f7ab863b81c48200cf36d67930309351c5d9a589c01e51693a0-merged.mount: Deactivated successfully.
Nov 24 13:25:28 np0005533938 podman[116519]: 2025-11-24 18:25:28.831005331 +0000 UTC m=+1.122195088 container remove 1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:25:28 np0005533938 systemd[1]: libpod-conmon-1f89d947d1fa8fd1fae876399d4d22b9168b4ff51be93e962d0edd293dd19a1d.scope: Deactivated successfully.
Nov 24 13:25:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:25:28 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:25:28 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:28 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev e8ef98df-9623-4830-b837-788c64e5968d does not exist
Nov 24 13:25:28 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d2ed7a39-3a7f-4b7c-8303-bce647536687 does not exist
Nov 24 13:25:28 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 13:25:28 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 13:25:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 13:25:29 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 13:25:29 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:29 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:25:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 13:25:29 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 13:25:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 13:25:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 13:25:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 13:25:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:25:34
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 24 13:25:34 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:25:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:25:35 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 24 13:25:35 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 24 13:25:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 13:25:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 13:25:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 13:25:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 13:25:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:37 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 24 13:25:37 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 24 13:25:38 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 24 13:25:38 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 24 13:25:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 13:25:39 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 13:25:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 13:25:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 13:25:40 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 24 13:25:40 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 24 13:25:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:25:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:25:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 24 13:25:45 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 24 13:25:46 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 24 13:25:46 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 24 13:25:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:47 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 24 13:25:47 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 24 13:25:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.016868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748017023, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7217, "num_deletes": 251, "total_data_size": 8863174, "memory_usage": 9069200, "flush_reason": "Manual Compaction"}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748049643, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7143782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 132, "largest_seqno": 7346, "table_properties": {"data_size": 7117496, "index_size": 17019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 75662, "raw_average_key_size": 23, "raw_value_size": 7055120, "raw_average_value_size": 2167, "num_data_blocks": 747, "num_entries": 3255, "num_filter_entries": 3255, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008327, "oldest_key_time": 1764008327, "file_creation_time": 1764008748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 32864 microseconds, and 14097 cpu microseconds.
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.049734) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7143782 bytes OK
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.049783) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051111) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051124) EVENT_LOG_v1 {"time_micros": 1764008748051120, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.051140) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8831760, prev total WAL file size 8831760, number of live WAL files 2.
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.053175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6976KB) 13(50KB) 8(1944B)]
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748053321, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7197520, "oldest_snapshot_seqno": -1}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3067 keys, 7154591 bytes, temperature: kUnknown
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748089967, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7154591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7128810, "index_size": 17031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73639, "raw_average_key_size": 24, "raw_value_size": 7068103, "raw_average_value_size": 2304, "num_data_blocks": 749, "num_entries": 3067, "num_filter_entries": 3067, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008748, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.090375) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7154591 bytes
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.091570) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.0 rd, 193.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.9, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3356, records dropped: 289 output_compression: NoCompression
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.091591) EVENT_LOG_v1 {"time_micros": 1764008748091581, "job": 4, "event": "compaction_finished", "compaction_time_micros": 36910, "compaction_time_cpu_micros": 20479, "output_level": 6, "num_output_files": 1, "total_output_size": 7154591, "num_input_records": 3356, "num_output_records": 3067, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748092986, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748093037, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008748093192, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 24 13:25:48 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:25:48.053040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:25:48 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 13:25:48 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 13:25:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:50 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 13:25:50 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 13:25:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:51 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 24 13:25:51 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 24 13:25:51 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 24 13:25:51 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 24 13:25:52 np0005533938 systemd[1]: session-35.scope: Deactivated successfully.
Nov 24 13:25:52 np0005533938 systemd[1]: session-35.scope: Consumed 28.334s CPU time.
Nov 24 13:25:52 np0005533938 systemd-logind[822]: Session 35 logged out. Waiting for processes to exit.
Nov 24 13:25:52 np0005533938 systemd-logind[822]: Removed session 35.
Nov 24 13:25:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:52 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 13:25:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:52 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 13:25:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:54 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 13:25:54 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 13:25:55 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 24 13:25:55 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 24 13:25:56 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 24 13:25:56 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 24 13:25:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:25:57 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 13:25:57 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 13:25:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:25:58 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 24 13:25:58 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 24 13:26:00 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 13:26:00 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 13:26:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:01 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 24 13:26:01 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 24 13:26:02 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 24 13:26:02 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 24 13:26:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 24 13:26:03 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 13:26:04 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 13:26:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:06 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 13:26:06 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 13:26:07 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Nov 24 13:26:07 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Nov 24 13:26:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:08 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 24 13:26:08 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 24 13:26:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:08 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 24 13:26:08 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 24 13:26:09 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 24 13:26:09 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 24 13:26:09 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 24 13:26:09 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 24 13:26:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:10 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 13:26:10 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 13:26:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 24 13:26:11 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 24 13:26:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:12 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 13:26:12 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 13:26:12 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 24 13:26:12 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 24 13:26:13 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 13:26:13 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 13:26:13 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 13:26:13 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 13:26:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 13:26:14 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 13:26:15 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 13:26:15 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 13:26:15 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 13:26:15 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 13:26:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 24 13:26:16 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 24 13:26:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 24 13:26:17 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 24 13:26:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:17 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Nov 24 13:26:18 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Nov 24 13:26:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:18 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 24 13:26:19 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 24 13:26:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 24 13:26:19 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 24 13:26:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:21 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 24 13:26:21 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 24 13:26:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 13:26:22 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 13:26:22 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 24 13:26:23 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 24 13:26:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 24 13:26:23 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 24 13:26:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 24 13:26:24 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 24 13:26:25 np0005533938 systemd-logind[822]: New session 36 of user zuul.
Nov 24 13:26:25 np0005533938 systemd[1]: Started Session 36 of User zuul.
Nov 24 13:26:25 np0005533938 python3.9[116927]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 13:26:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:27 np0005533938 python3.9[117101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:26:27 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Nov 24 13:26:27 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Nov 24 13:26:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:28 np0005533938 python3.9[117257]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:26:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:29 np0005533938 python3.9[117410]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:29 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev a5dfd6e2-0832-4238-9b21-533da6bb36f2 does not exist
Nov 24 13:26:29 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev bb2dacfa-fe32-4b79-a423-ffbd7cf6707d does not exist
Nov 24 13:26:29 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev a4e28797-d880-485b-9efa-5a7b1eb4417a does not exist
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:26:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:26:30 np0005533938 python3.9[117768]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.257972155 +0000 UTC m=+0.036914107 container create d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:26:30 np0005533938 systemd[1]: Started libpod-conmon-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope.
Nov 24 13:26:30 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 24 13:26:30 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.330090365 +0000 UTC m=+0.109032367 container init d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.240892586 +0000 UTC m=+0.019834598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.339587688 +0000 UTC m=+0.118529640 container start d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:26:30 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.342585721 +0000 UTC m=+0.121527733 container attach d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 13:26:30 np0005533938 blissful_roentgen[117875]: 167 167
Nov 24 13:26:30 np0005533938 systemd[1]: libpod-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope: Deactivated successfully.
Nov 24 13:26:30 np0005533938 conmon[117875]: conmon d1382ffbd2a51c16b9b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope/container/memory.events
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.346027756 +0000 UTC m=+0.124969728 container died d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:26:30 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7fe662a18a6a66779f369905408f134981e708514c545afbd5aaff33ee97be3f-merged.mount: Deactivated successfully.
Nov 24 13:26:30 np0005533938 podman[117836]: 2025-11-24 18:26:30.400684267 +0000 UTC m=+0.179626219 container remove d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 13:26:30 np0005533938 systemd[1]: libpod-conmon-d1382ffbd2a51c16b9b00d9dadc359272759d837983d94044b7470952cd3b22b.scope: Deactivated successfully.
Nov 24 13:26:30 np0005533938 podman[117975]: 2025-11-24 18:26:30.574879461 +0000 UTC m=+0.046522512 container create 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:26:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:30 np0005533938 systemd[1]: Started libpod-conmon-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope.
Nov 24 13:26:30 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:30 np0005533938 podman[117975]: 2025-11-24 18:26:30.559305889 +0000 UTC m=+0.030948960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:30 np0005533938 podman[117975]: 2025-11-24 18:26:30.658277118 +0000 UTC m=+0.129920189 container init 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:26:30 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:26:30 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:30 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:26:30 np0005533938 podman[117975]: 2025-11-24 18:26:30.670091568 +0000 UTC m=+0.141734619 container start 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:26:30 np0005533938 podman[117975]: 2025-11-24 18:26:30.673313557 +0000 UTC m=+0.144956628 container attach 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:26:30 np0005533938 python3.9[118051]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:26:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 24 13:26:30 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 24 13:26:31 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 24 13:26:31 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 24 13:26:31 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 24 13:26:31 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 24 13:26:31 np0005533938 python3.9[118214]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:26:31 np0005533938 nostalgic_wilbur[118018]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:26:31 np0005533938 nostalgic_wilbur[118018]: --> relative data size: 1.0
Nov 24 13:26:31 np0005533938 nostalgic_wilbur[118018]: --> All data devices are unavailable
Nov 24 13:26:31 np0005533938 network[118242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:26:31 np0005533938 network[118243]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:26:31 np0005533938 network[118244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:26:31 np0005533938 systemd[1]: libpod-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope: Deactivated successfully.
Nov 24 13:26:31 np0005533938 podman[117975]: 2025-11-24 18:26:31.749419012 +0000 UTC m=+1.221062063 container died 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:26:31 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 24 13:26:31 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 24 13:26:32 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 24 13:26:32 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6d28baaf2c7b31f3267068eba2086cfbe674bb2a0505b7e63f88ad75b4aa5d0b-merged.mount: Deactivated successfully.
Nov 24 13:26:32 np0005533938 ceph-osd[88544]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 24 13:26:32 np0005533938 podman[117975]: 2025-11-24 18:26:32.412954942 +0000 UTC m=+1.884597993 container remove 41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:26:32 np0005533938 systemd[1]: libpod-conmon-41f45104d0dde4a117da85a9885002fa542a8562049a64e607a7d43359b6d7b6.scope: Deactivated successfully.
Nov 24 13:26:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:32 np0005533938 podman[118433]: 2025-11-24 18:26:32.94835505 +0000 UTC m=+0.038719991 container create cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:26:32 np0005533938 systemd[1]: Started libpod-conmon-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope.
Nov 24 13:26:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:33.02049337 +0000 UTC m=+0.110858311 container init cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:32.928425091 +0000 UTC m=+0.018790062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:33.029205694 +0000 UTC m=+0.119570635 container start cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:33.032386832 +0000 UTC m=+0.122751773 container attach cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:26:33 np0005533938 heuristic_benz[118453]: 167 167
Nov 24 13:26:33 np0005533938 systemd[1]: libpod-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope: Deactivated successfully.
Nov 24 13:26:33 np0005533938 conmon[118453]: conmon cb81d8f4060bd95b807f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope/container/memory.events
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:33.035799026 +0000 UTC m=+0.126163967 container died cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:26:33 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f611dedd19e3f730f5612dd837fb7187c1fa4b39b90fc05b36b4508ad046f1da-merged.mount: Deactivated successfully.
Nov 24 13:26:33 np0005533938 podman[118433]: 2025-11-24 18:26:33.080738338 +0000 UTC m=+0.171103269 container remove cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:26:33 np0005533938 systemd[1]: libpod-conmon-cb81d8f4060bd95b807f6006092e531f95a9770d8890266e187847a40cc5449d.scope: Deactivated successfully.
Nov 24 13:26:33 np0005533938 podman[118486]: 2025-11-24 18:26:33.236180412 +0000 UTC m=+0.040941775 container create 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:26:33 np0005533938 systemd[1]: Started libpod-conmon-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope.
Nov 24 13:26:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:33 np0005533938 podman[118486]: 2025-11-24 18:26:33.218822916 +0000 UTC m=+0.023584319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:33 np0005533938 podman[118486]: 2025-11-24 18:26:33.320418589 +0000 UTC m=+0.125179972 container init 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:26:33 np0005533938 podman[118486]: 2025-11-24 18:26:33.327895463 +0000 UTC m=+0.132656826 container start 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:26:33 np0005533938 podman[118486]: 2025-11-24 18:26:33.330564088 +0000 UTC m=+0.135325471 container attach 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:26:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 13:26:33 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]: {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    "0": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "devices": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "/dev/loop3"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            ],
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_name": "ceph_lv0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_size": "21470642176",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "name": "ceph_lv0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "tags": {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_name": "ceph",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.crush_device_class": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.encrypted": "0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_id": "0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.vdo": "0"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            },
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "vg_name": "ceph_vg0"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        }
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    ],
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    "1": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "devices": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "/dev/loop4"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            ],
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_name": "ceph_lv1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_size": "21470642176",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "name": "ceph_lv1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "tags": {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_name": "ceph",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.crush_device_class": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.encrypted": "0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_id": "1",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.vdo": "0"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            },
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "vg_name": "ceph_vg1"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        }
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    ],
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    "2": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "devices": [
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "/dev/loop5"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            ],
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_name": "ceph_lv2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_size": "21470642176",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "name": "ceph_lv2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "tags": {
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.cluster_name": "ceph",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.crush_device_class": "",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.encrypted": "0",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osd_id": "2",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:                "ceph.vdo": "0"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            },
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "type": "block",
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:            "vg_name": "ceph_vg2"
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:        }
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]:    ]
Nov 24 13:26:34 np0005533938 modest_wilbur[118506]: }
Nov 24 13:26:34 np0005533938 systemd[1]: libpod-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope: Deactivated successfully.
Nov 24 13:26:34 np0005533938 podman[118523]: 2025-11-24 18:26:34.093053968 +0000 UTC m=+0.028681435 container died 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:26:34 np0005533938 systemd[1]: var-lib-containers-storage-overlay-9ed966ea4da2d7a6c7a9c4ab0602377c3b1de71a479fabfa20c2cca3e5d9cb9b-merged.mount: Deactivated successfully.
Nov 24 13:26:34 np0005533938 podman[118523]: 2025-11-24 18:26:34.154000404 +0000 UTC m=+0.089627841 container remove 55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:26:34 np0005533938 systemd[1]: libpod-conmon-55d6f1751a43ae7a4e2f5fa27c9b10ae1487ce40ed7db967b5cadd2f3198cdaf.scope: Deactivated successfully.
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:26:34
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log']
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:26:34 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:34 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:26:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.807110409 +0000 UTC m=+0.045855606 container create 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:26:34 np0005533938 systemd[1]: Started libpod-conmon-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope.
Nov 24 13:26:34 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.872592406 +0000 UTC m=+0.111337613 container init 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.78349019 +0000 UTC m=+0.022235427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.881050013 +0000 UTC m=+0.119795200 container start 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:26:34 np0005533938 condescending_ishizaka[118720]: 167 167
Nov 24 13:26:34 np0005533938 systemd[1]: libpod-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope: Deactivated successfully.
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.896752879 +0000 UTC m=+0.135498086 container attach 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.897743313 +0000 UTC m=+0.136488520 container died 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:26:34 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8a2ede3ee68537e0ffb109bb8623a322f3775189baf07d8bed53f43406422ea1-merged.mount: Deactivated successfully.
Nov 24 13:26:34 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 24 13:26:34 np0005533938 podman[118699]: 2025-11-24 18:26:34.93064547 +0000 UTC m=+0.169390657 container remove 3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 24 13:26:34 np0005533938 systemd[1]: libpod-conmon-3160e29c86c4fdd87ecea913fb9f162aaa3e9d8643733f5f21711e661438bbdd.scope: Deactivated successfully.
Nov 24 13:26:34 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 24 13:26:35 np0005533938 podman[118753]: 2025-11-24 18:26:35.0809875 +0000 UTC m=+0.049677940 container create 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:26:35 np0005533938 systemd[1]: Started libpod-conmon-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope.
Nov 24 13:26:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:26:35 np0005533938 podman[118753]: 2025-11-24 18:26:35.053471874 +0000 UTC m=+0.022162324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:26:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:35 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:26:35 np0005533938 podman[118753]: 2025-11-24 18:26:35.172539966 +0000 UTC m=+0.141230406 container init 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:26:35 np0005533938 podman[118753]: 2025-11-24 18:26:35.179114957 +0000 UTC m=+0.147805387 container start 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:26:35 np0005533938 podman[118753]: 2025-11-24 18:26:35.182339906 +0000 UTC m=+0.151030366 container attach 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:26:36 np0005533938 competent_napier[118776]: {
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_id": 0,
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "type": "bluestore"
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    },
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_id": 1,
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "type": "bluestore"
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    },
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_id": 2,
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:26:36 np0005533938 competent_napier[118776]:        "type": "bluestore"
Nov 24 13:26:36 np0005533938 competent_napier[118776]:    }
Nov 24 13:26:36 np0005533938 competent_napier[118776]: }
Nov 24 13:26:36 np0005533938 python3.9[118948]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:26:36 np0005533938 systemd[1]: libpod-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope: Deactivated successfully.
Nov 24 13:26:36 np0005533938 podman[118753]: 2025-11-24 18:26:36.085850265 +0000 UTC m=+1.054540695 container died 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:26:36 np0005533938 systemd[1]: var-lib-containers-storage-overlay-06e0c55129496b2dc4652db99bb6d8f18e2494d395bfef6c2120933ccea58b8e-merged.mount: Deactivated successfully.
Nov 24 13:26:36 np0005533938 podman[118753]: 2025-11-24 18:26:36.142075375 +0000 UTC m=+1.110765815 container remove 5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:26:36 np0005533938 systemd[1]: libpod-conmon-5746f6c502a6640b28fd557d5709d03fc5c98185a3bb6bd9dc01e42e752ba20f.scope: Deactivated successfully.
Nov 24 13:26:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:26:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:26:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:36 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 8c02dc4a-4c03-447e-8e35-d39b3918de36 does not exist
Nov 24 13:26:36 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d304f9b0-362a-4392-952b-b4f747cd8db1 does not exist
Nov 24 13:26:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 13:26:36 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 13:26:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:36 np0005533938 python3.9[119177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:26:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:26:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 24 13:26:37 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 24 13:26:37 np0005533938 python3.9[119331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:26:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:38 np0005533938 python3.9[119489]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:26:39 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 24 13:26:39 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 24 13:26:39 np0005533938 python3.9[119573]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:26:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 13:26:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:40 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 13:26:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:42 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:26:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:26:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:49 np0005533938 python3.9[119770]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:26:49 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 13:26:49 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 13:26:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:50 np0005533938 python3.9[120057]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 13:26:51 np0005533938 python3.9[120209]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 13:26:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:52 np0005533938 python3.9[120361]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:26:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:53 np0005533938 python3.9[120513]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 13:26:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:54 np0005533938 python3.9[120665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:26:55 np0005533938 python3.9[120817]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:26:55 np0005533938 python3.9[120895]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:26:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:56 np0005533938 python3.9[121047]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:26:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:26:57 np0005533938 python3.9[121201]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 13:26:58 np0005533938 python3.9[121354]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 13:26:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:26:59 np0005533938 python3.9[121507]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:27:00 np0005533938 python3.9[121659]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 13:27:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:00 np0005533938 python3.9[121811]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:27:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:09 np0005533938 systemd[1]: session-36.scope: Deactivated successfully.
Nov 24 13:27:09 np0005533938 systemd-logind[822]: Session 36 logged out. Waiting for processes to exit.
Nov 24 13:27:09 np0005533938 systemd[1]: session-36.scope: Consumed 18.639s CPU time.
Nov 24 13:27:09 np0005533938 systemd-logind[822]: Removed session 36.
Nov 24 13:27:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:27:34
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes']
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:27:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:27:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:36 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 41f3dbc4-e30e-4de5-91c0-953204ffc876 does not exist
Nov 24 13:27:36 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev b0b3eb31-d179-443a-9146-74c2c61a02af does not exist
Nov 24 13:27:36 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev fffc7eaf-46e8-4cc8-94e3-ec5ca5ea6a7a does not exist
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:27:36 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.495720437 +0000 UTC m=+0.036953129 container create 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:27:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:37 np0005533938 systemd[1]: Started libpod-conmon-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope.
Nov 24 13:27:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.570636646 +0000 UTC m=+0.111869358 container init 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.478702418 +0000 UTC m=+0.019935140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.576928367 +0000 UTC m=+0.118161059 container start 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.579924069 +0000 UTC m=+0.121156761 container attach 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:27:37 np0005533938 compassionate_goodall[122168]: 167 167
Nov 24 13:27:37 np0005533938 systemd[1]: libpod-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope: Deactivated successfully.
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.582394398 +0000 UTC m=+0.123627110 container died 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:27:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-feaae527d37bb871598a7d5891c164da860c92253bc14309691f1d3e29c0032d-merged.mount: Deactivated successfully.
Nov 24 13:27:37 np0005533938 podman[122152]: 2025-11-24 18:27:37.627060241 +0000 UTC m=+0.168292933 container remove 322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 24 13:27:37 np0005533938 systemd[1]: libpod-conmon-322b3c2e8053ce9bfec43da6c7c033572d384cb5164212706eb0a29621128a5c.scope: Deactivated successfully.
Nov 24 13:27:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:27:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:37 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:27:37 np0005533938 podman[122192]: 2025-11-24 18:27:37.775841714 +0000 UTC m=+0.034378797 container create 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:27:37 np0005533938 systemd[1]: Started libpod-conmon-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope.
Nov 24 13:27:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:37 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:37 np0005533938 podman[122192]: 2025-11-24 18:27:37.843070578 +0000 UTC m=+0.101607691 container init 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 13:27:37 np0005533938 podman[122192]: 2025-11-24 18:27:37.853891748 +0000 UTC m=+0.112428881 container start 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:27:37 np0005533938 podman[122192]: 2025-11-24 18:27:37.76108898 +0000 UTC m=+0.019626093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:37 np0005533938 podman[122192]: 2025-11-24 18:27:37.858365645 +0000 UTC m=+0.116902748 container attach 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:38 np0005533938 lucid_bardeen[122209]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:27:38 np0005533938 lucid_bardeen[122209]: --> relative data size: 1.0
Nov 24 13:27:38 np0005533938 lucid_bardeen[122209]: --> All data devices are unavailable
Nov 24 13:27:38 np0005533938 systemd[1]: libpod-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope: Deactivated successfully.
Nov 24 13:27:38 np0005533938 podman[122192]: 2025-11-24 18:27:38.86511708 +0000 UTC m=+1.123654163 container died 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:27:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c9739673812d01ea11581e1947bfa6f138ceed6499d810b4f57927f47c3e0b6b-merged.mount: Deactivated successfully.
Nov 24 13:27:38 np0005533938 podman[122192]: 2025-11-24 18:27:38.907311034 +0000 UTC m=+1.165848117 container remove 61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:27:38 np0005533938 systemd[1]: libpod-conmon-61378a05612e3ae26ced0b36323e65506d66e431f797a557c9cd3604c0d2a0f4.scope: Deactivated successfully.
Nov 24 13:27:39 np0005533938 podman[122389]: 2025-11-24 18:27:39.462491326 +0000 UTC m=+0.048339502 container create 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:27:39 np0005533938 systemd[1]: Started libpod-conmon-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope.
Nov 24 13:27:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:39 np0005533938 podman[122389]: 2025-11-24 18:27:39.447363592 +0000 UTC m=+0.033211778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:39 np0005533938 podman[122389]: 2025-11-24 18:27:39.552424205 +0000 UTC m=+0.138272391 container init 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:27:39 np0005533938 podman[122389]: 2025-11-24 18:27:39.563350748 +0000 UTC m=+0.149198914 container start 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:39 np0005533938 hopeful_mendeleev[122405]: 167 167
Nov 24 13:27:39 np0005533938 podman[122389]: 2025-11-24 18:27:39.566513964 +0000 UTC m=+0.152362150 container attach 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 13:27:39 np0005533938 systemd[1]: libpod-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope: Deactivated successfully.
Nov 24 13:27:39 np0005533938 podman[122410]: 2025-11-24 18:27:39.609639229 +0000 UTC m=+0.026901057 container died 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-78f3406e890c5f019b5513483913f43ddfff92320aedfbbc3dd1190559d00967-merged.mount: Deactivated successfully.
Nov 24 13:27:39 np0005533938 podman[122410]: 2025-11-24 18:27:39.641593527 +0000 UTC m=+0.058855355 container remove 82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:27:39 np0005533938 systemd[1]: libpod-conmon-82c4d478962ebfe6f55a8956f59b685a3aa88c0b622061cea1d0008e3ecea5f0.scope: Deactivated successfully.
Nov 24 13:27:39 np0005533938 podman[122432]: 2025-11-24 18:27:39.801963168 +0000 UTC m=+0.040466583 container create ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:27:39 np0005533938 systemd[1]: Started libpod-conmon-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope.
Nov 24 13:27:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:39 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:39 np0005533938 podman[122432]: 2025-11-24 18:27:39.865725349 +0000 UTC m=+0.104228754 container init ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:27:39 np0005533938 podman[122432]: 2025-11-24 18:27:39.874728655 +0000 UTC m=+0.113232050 container start ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:27:39 np0005533938 podman[122432]: 2025-11-24 18:27:39.877260546 +0000 UTC m=+0.115763941 container attach ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:27:39 np0005533938 podman[122432]: 2025-11-24 18:27:39.783258709 +0000 UTC m=+0.021762154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:40 np0005533938 strange_lalande[122448]: {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    "0": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "devices": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "/dev/loop3"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            ],
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_name": "ceph_lv0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_size": "21470642176",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "name": "ceph_lv0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "tags": {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_name": "ceph",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.crush_device_class": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.encrypted": "0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_id": "0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.vdo": "0"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            },
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "vg_name": "ceph_vg0"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        }
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    ],
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    "1": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "devices": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "/dev/loop4"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            ],
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_name": "ceph_lv1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_size": "21470642176",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "name": "ceph_lv1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "tags": {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_name": "ceph",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.crush_device_class": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.encrypted": "0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_id": "1",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.vdo": "0"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            },
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "vg_name": "ceph_vg1"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        }
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    ],
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    "2": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "devices": [
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "/dev/loop5"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            ],
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_name": "ceph_lv2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_size": "21470642176",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "name": "ceph_lv2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "tags": {
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.cluster_name": "ceph",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.crush_device_class": "",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.encrypted": "0",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osd_id": "2",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:                "ceph.vdo": "0"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            },
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "type": "block",
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:            "vg_name": "ceph_vg2"
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:        }
Nov 24 13:27:40 np0005533938 strange_lalande[122448]:    ]
Nov 24 13:27:40 np0005533938 strange_lalande[122448]: }
Nov 24 13:27:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:40 np0005533938 systemd[1]: libpod-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope: Deactivated successfully.
Nov 24 13:27:40 np0005533938 conmon[122448]: conmon ab1c46ea3202c9a1e9ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope/container/memory.events
Nov 24 13:27:40 np0005533938 podman[122457]: 2025-11-24 18:27:40.653755991 +0000 UTC m=+0.019620932 container died ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:27:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cf8066b3cfea7b8144b2b2a0e041ba4a0cd968ea9c9ffee4736fb62ae198b37f-merged.mount: Deactivated successfully.
Nov 24 13:27:40 np0005533938 podman[122457]: 2025-11-24 18:27:40.703970577 +0000 UTC m=+0.069835518 container remove ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:40 np0005533938 systemd[1]: libpod-conmon-ab1c46ea3202c9a1e9ba66794b9caa37e4c9d9a956945791b506e5195f2a11ba.scope: Deactivated successfully.
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.270808979 +0000 UTC m=+0.037467171 container create 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:27:41 np0005533938 systemd[1]: Started libpod-conmon-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope.
Nov 24 13:27:41 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.254746393 +0000 UTC m=+0.021404635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.35953632 +0000 UTC m=+0.126194532 container init 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.370414441 +0000 UTC m=+0.137072663 container start 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:27:41 np0005533938 admiring_rubin[122628]: 167 167
Nov 24 13:27:41 np0005533938 systemd[1]: libpod-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope: Deactivated successfully.
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.374341425 +0000 UTC m=+0.140999627 container attach 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.375972624 +0000 UTC m=+0.142630826 container died 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:27:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b7dcdf0c867c734e19396294560753826024801f34105026d9275ff80e462e40-merged.mount: Deactivated successfully.
Nov 24 13:27:41 np0005533938 podman[122612]: 2025-11-24 18:27:41.41034252 +0000 UTC m=+0.177000722 container remove 2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:27:41 np0005533938 systemd[1]: libpod-conmon-2246bab8880687ad8dd6ca39cb2cc303ffe3fbf0635cd99e75f70b129bef42da.scope: Deactivated successfully.
Nov 24 13:27:41 np0005533938 podman[122652]: 2025-11-24 18:27:41.556239903 +0000 UTC m=+0.035136084 container create 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:27:41 np0005533938 systemd[1]: Started libpod-conmon-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope.
Nov 24 13:27:41 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:27:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:41 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:27:41 np0005533938 podman[122652]: 2025-11-24 18:27:41.540738071 +0000 UTC m=+0.019634282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:27:41 np0005533938 podman[122652]: 2025-11-24 18:27:41.644552324 +0000 UTC m=+0.123448515 container init 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:27:41 np0005533938 podman[122652]: 2025-11-24 18:27:41.649847341 +0000 UTC m=+0.128743522 container start 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:27:41 np0005533938 podman[122652]: 2025-11-24 18:27:41.656275185 +0000 UTC m=+0.135171386 container attach 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]: {
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_id": 0,
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "type": "bluestore"
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    },
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_id": 1,
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "type": "bluestore"
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    },
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_id": 2,
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:        "type": "bluestore"
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]:    }
Nov 24 13:27:42 np0005533938 elegant_kilby[122669]: }
Nov 24 13:27:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:42 np0005533938 systemd[1]: libpod-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope: Deactivated successfully.
Nov 24 13:27:42 np0005533938 conmon[122669]: conmon 0dd2155a4ead81b2acd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope/container/memory.events
Nov 24 13:27:42 np0005533938 podman[122652]: 2025-11-24 18:27:42.61841343 +0000 UTC m=+1.097309621 container died 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:27:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c6db3010aeab685276e3a88d8f6afb3d42c807fe5627954ae0cbde02c58b5189-merged.mount: Deactivated successfully.
Nov 24 13:27:42 np0005533938 podman[122652]: 2025-11-24 18:27:42.679274381 +0000 UTC m=+1.158170572 container remove 0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:27:42 np0005533938 systemd[1]: libpod-conmon-0dd2155a4ead81b2acd30a79be157dc921ca8d243786271b16e5de6d939fac9d.scope: Deactivated successfully.
Nov 24 13:27:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:27:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:27:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 541645d5-e97a-4098-b39d-f457a8aaf1c1 does not exist
Nov 24 13:27:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev bdf4c7a3-b353-47e6-a815-2ace31dbd4b4 does not exist
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:27:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:27:43 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:43 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:27:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:45 np0005533938 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 13:27:45 np0005533938 systemd[1]: session-17.scope: Consumed 1min 25.451s CPU time.
Nov 24 13:27:45 np0005533938 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 24 13:27:45 np0005533938 systemd-logind[822]: Removed session 17.
Nov 24 13:27:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:27:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:27:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:02 np0005533938 systemd-logind[822]: New session 37 of user zuul.
Nov 24 13:28:02 np0005533938 systemd[1]: Started Session 37 of User zuul.
Nov 24 13:28:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:02 np0005533938 python3.9[122920]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 24 13:28:04 np0005533938 python3.9[123094]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:05 np0005533938 python3.9[123250]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:28:06 np0005533938 python3.9[123403]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:28:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:07 np0005533938 python3.9[123557]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:28:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:07 np0005533938 python3.9[123709]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:28:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:08 np0005533938 python3.9[123859]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:28:08 np0005533938 network[123876]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:28:08 np0005533938 network[123877]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:28:08 np0005533938 network[123878]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:28:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:12 np0005533938 python3.9[124138]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:28:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:13 np0005533938 python3.9[124288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:28:14 np0005533938 python3.9[124442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:28:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:15 np0005533938 python3.9[124600]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:28:16 np0005533938 python3.9[124684]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:28:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.602345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907602381, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 252, "total_data_size": 2419442, "memory_usage": 2450976, "flush_reason": "Manual Compaction"}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907729554, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1411355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7347, "largest_seqno": 9022, "table_properties": {"data_size": 1405773, "index_size": 2530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16096, "raw_average_key_size": 20, "raw_value_size": 1392659, "raw_average_value_size": 1799, "num_data_blocks": 119, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008749, "oldest_key_time": 1764008749, "file_creation_time": 1764008907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 127268 microseconds, and 5657 cpu microseconds.
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.729609) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1411355 bytes OK
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.729630) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.731963) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.731982) EVENT_LOG_v1 {"time_micros": 1764008907731974, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.732003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2411970, prev total WAL file size 2411970, number of live WAL files 2.
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.733055) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1378KB)], [20(6986KB)]
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907733147, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8565946, "oldest_snapshot_seqno": -1}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3399 keys, 6787027 bytes, temperature: kUnknown
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907781056, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6787027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6761225, "index_size": 16221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81388, "raw_average_key_size": 23, "raw_value_size": 6696679, "raw_average_value_size": 1970, "num_data_blocks": 719, "num_entries": 3399, "num_filter_entries": 3399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.781313) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6787027 bytes
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.782602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.6 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 3841, records dropped: 442 output_compression: NoCompression
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.782619) EVENT_LOG_v1 {"time_micros": 1764008907782611, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47966, "compaction_time_cpu_micros": 15177, "output_level": 6, "num_output_files": 1, "total_output_size": 6787027, "num_input_records": 3841, "num_output_records": 3399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907783022, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008907784300, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.732889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:27 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:28:27.784353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:28:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:28:34
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'backups']
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:28:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:28:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev b8a543d7-4190-494c-a197-0c6f1289c639 does not exist
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 23ea1bba-f6d9-4fff-9bfb-5b1af5764a1f does not exist
Nov 24 13:28:43 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d7430aad-7aa0-4e7d-8c19-35685ff806d3 does not exist
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:28:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:28:43 np0005533938 podman[125111]: 2025-11-24 18:28:43.959195069 +0000 UTC m=+0.050459241 container create 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:28:43 np0005533938 systemd[1]: Started libpod-conmon-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope.
Nov 24 13:28:44 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:43.929391515 +0000 UTC m=+0.020655727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:44.088608029 +0000 UTC m=+0.179872201 container init 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:44.095482701 +0000 UTC m=+0.186746853 container start 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:28:44 np0005533938 elegant_mendel[125128]: 167 167
Nov 24 13:28:44 np0005533938 systemd[1]: libpod-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope: Deactivated successfully.
Nov 24 13:28:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:28:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:44.127706926 +0000 UTC m=+0.218971098 container attach 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:44.128036704 +0000 UTC m=+0.219300856 container died 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:28:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e0d4ee0497fdbf92cf49d689120abaa9c60703fd0b7ecde065a3c8f577fa9bf3-merged.mount: Deactivated successfully.
Nov 24 13:28:44 np0005533938 podman[125111]: 2025-11-24 18:28:44.165808387 +0000 UTC m=+0.257072559 container remove 638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:28:44 np0005533938 systemd[1]: libpod-conmon-638e8faf7f42b84ae969ac04a7d419cd3acada495bdf86017306b26a33ffa41b.scope: Deactivated successfully.
Nov 24 13:28:44 np0005533938 podman[125153]: 2025-11-24 18:28:44.317372001 +0000 UTC m=+0.040429851 container create bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:28:44 np0005533938 systemd[1]: Started libpod-conmon-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope.
Nov 24 13:28:44 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:44 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:44 np0005533938 podman[125153]: 2025-11-24 18:28:44.299147296 +0000 UTC m=+0.022205166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:44 np0005533938 podman[125153]: 2025-11-24 18:28:44.400876665 +0000 UTC m=+0.123934525 container init bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:28:44 np0005533938 podman[125153]: 2025-11-24 18:28:44.408415793 +0000 UTC m=+0.131473623 container start bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:28:44 np0005533938 podman[125153]: 2025-11-24 18:28:44.411535291 +0000 UTC m=+0.134593141 container attach bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:28:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:45 np0005533938 mystifying_darwin[125170]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:28:45 np0005533938 mystifying_darwin[125170]: --> relative data size: 1.0
Nov 24 13:28:45 np0005533938 mystifying_darwin[125170]: --> All data devices are unavailable
Nov 24 13:28:45 np0005533938 systemd[1]: libpod-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope: Deactivated successfully.
Nov 24 13:28:45 np0005533938 podman[125153]: 2025-11-24 18:28:45.33921652 +0000 UTC m=+1.062274360 container died bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:28:45 np0005533938 systemd[1]: var-lib-containers-storage-overlay-952b05c3f5c4c441a5edee1e8c327083500af48fb5b6b95349a7a0ad4db122a3-merged.mount: Deactivated successfully.
Nov 24 13:28:45 np0005533938 podman[125153]: 2025-11-24 18:28:45.386780498 +0000 UTC m=+1.109838328 container remove bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_darwin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:28:45 np0005533938 systemd[1]: libpod-conmon-bda48b0575bb25f07a94fc0ce70ab5411973e9a18afe63ff30c9c9d933a9c49a.scope: Deactivated successfully.
Nov 24 13:28:45 np0005533938 podman[125355]: 2025-11-24 18:28:45.924306577 +0000 UTC m=+0.035008605 container create e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:28:45 np0005533938 systemd[1]: Started libpod-conmon-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope.
Nov 24 13:28:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:45.909303393 +0000 UTC m=+0.020005431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:46.007139015 +0000 UTC m=+0.117841043 container init e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:46.013445043 +0000 UTC m=+0.124147061 container start e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:46.016697744 +0000 UTC m=+0.127399792 container attach e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:28:46 np0005533938 interesting_noyce[125372]: 167 167
Nov 24 13:28:46 np0005533938 systemd[1]: libpod-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope: Deactivated successfully.
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:46.018505199 +0000 UTC m=+0.129207217 container died e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:28:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e74ba456fcde87c73223e67ca09af9984afa47e0b201f16aead54ec57679bc7e-merged.mount: Deactivated successfully.
Nov 24 13:28:46 np0005533938 podman[125355]: 2025-11-24 18:28:46.060173649 +0000 UTC m=+0.170875667 container remove e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:28:46 np0005533938 systemd[1]: libpod-conmon-e99a855e84265a574f5490171479dfa507144a8ccbbf6c6b02c37ca5893b5b18.scope: Deactivated successfully.
Nov 24 13:28:46 np0005533938 podman[125397]: 2025-11-24 18:28:46.219005224 +0000 UTC m=+0.035438725 container create 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:28:46 np0005533938 systemd[1]: Started libpod-conmon-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope.
Nov 24 13:28:46 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:46 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:46 np0005533938 podman[125397]: 2025-11-24 18:28:46.29695093 +0000 UTC m=+0.113384451 container init 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:28:46 np0005533938 podman[125397]: 2025-11-24 18:28:46.206168694 +0000 UTC m=+0.022602215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:46 np0005533938 podman[125397]: 2025-11-24 18:28:46.304681343 +0000 UTC m=+0.121114844 container start 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:28:46 np0005533938 podman[125397]: 2025-11-24 18:28:46.307718829 +0000 UTC m=+0.124152330 container attach 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:28:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]: {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    "0": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "devices": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "/dev/loop3"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            ],
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_name": "ceph_lv0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_size": "21470642176",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "name": "ceph_lv0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "tags": {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_name": "ceph",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.crush_device_class": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.encrypted": "0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_id": "0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.vdo": "0"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            },
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "vg_name": "ceph_vg0"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        }
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    ],
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    "1": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "devices": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "/dev/loop4"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            ],
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_name": "ceph_lv1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_size": "21470642176",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "name": "ceph_lv1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "tags": {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_name": "ceph",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.crush_device_class": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.encrypted": "0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_id": "1",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.vdo": "0"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            },
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "vg_name": "ceph_vg1"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        }
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    ],
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    "2": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "devices": [
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "/dev/loop5"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            ],
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_name": "ceph_lv2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_size": "21470642176",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "name": "ceph_lv2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "tags": {
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.cluster_name": "ceph",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.crush_device_class": "",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.encrypted": "0",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osd_id": "2",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:                "ceph.vdo": "0"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            },
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "type": "block",
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:            "vg_name": "ceph_vg2"
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:        }
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]:    ]
Nov 24 13:28:47 np0005533938 friendly_hodgkin[125414]: }
Nov 24 13:28:47 np0005533938 systemd[1]: libpod-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope: Deactivated successfully.
Nov 24 13:28:47 np0005533938 podman[125397]: 2025-11-24 18:28:47.083478366 +0000 UTC m=+0.899911867 container died 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:28:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1c785c51b2dfba7dd72a8474db7422968d4785450789d1338ad203595153e72b-merged.mount: Deactivated successfully.
Nov 24 13:28:47 np0005533938 podman[125397]: 2025-11-24 18:28:47.159381401 +0000 UTC m=+0.975814932 container remove 8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:28:47 np0005533938 systemd[1]: libpod-conmon-8cd1b308f4de35451a2cd6dabaf753daf2fc9819d0e05df3c315bd3ef7c98c6c.scope: Deactivated successfully.
Nov 24 13:28:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:28:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2031 writes, 9048 keys, 2031 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2031 writes, 2031 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2031 writes, 9048 keys, 2031 commit groups, 1.0 writes per commit group, ingest: 11.00 MB, 0.02 MB/s#012Interval WAL: 2031 writes, 2031 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.7      0.16              0.02         3    0.054       0      0       0.0       0.0#012  L6      1/0    6.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    177.1    156.6      0.08              0.04         2    0.042    7197    731       0.0       0.0#012 Sum      1/0    6.47 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     60.9     87.1      0.25              0.06         5    0.049    7197    731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     61.4     87.6      0.25              0.06         4    0.061    7197    731       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    177.1    156.6      0.08              0.04         2    0.042    7197    731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     51.0      0.16              0.02         2    0.080       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 590.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(36,502.97 KB,0.159474%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.16 KB,0.0187564%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:28:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.701463784 +0000 UTC m=+0.046641205 container create 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:28:47 np0005533938 systemd[1]: Started libpod-conmon-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope.
Nov 24 13:28:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.681534807 +0000 UTC m=+0.026712228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.862117055 +0000 UTC m=+0.207294496 container init 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.868510335 +0000 UTC m=+0.213687756 container start 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.872128215 +0000 UTC m=+0.217305656 container attach 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:28:47 np0005533938 intelligent_hermann[125591]: 167 167
Nov 24 13:28:47 np0005533938 systemd[1]: libpod-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope: Deactivated successfully.
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.874704719 +0000 UTC m=+0.219882150 container died 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:28:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0721ba48362adbb4cae5897964e05ae59a5cc9390f048e5e03baaca416331697-merged.mount: Deactivated successfully.
Nov 24 13:28:47 np0005533938 podman[125576]: 2025-11-24 18:28:47.911483548 +0000 UTC m=+0.256660979 container remove 655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:28:47 np0005533938 systemd[1]: libpod-conmon-655765b3cbafd88ea77b5bcd85327dd53d1886c908ee1d8d02ccf29c6124e107.scope: Deactivated successfully.
Nov 24 13:28:48 np0005533938 podman[125615]: 2025-11-24 18:28:48.059939024 +0000 UTC m=+0.035094447 container create 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:28:48 np0005533938 systemd[1]: Started libpod-conmon-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope.
Nov 24 13:28:48 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:28:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:48 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:28:48 np0005533938 podman[125615]: 2025-11-24 18:28:48.118047705 +0000 UTC m=+0.093203148 container init 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:28:48 np0005533938 podman[125615]: 2025-11-24 18:28:48.126963257 +0000 UTC m=+0.102118680 container start 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:28:48 np0005533938 podman[125615]: 2025-11-24 18:28:48.133180022 +0000 UTC m=+0.108335475 container attach 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:28:48 np0005533938 podman[125615]: 2025-11-24 18:28:48.044396186 +0000 UTC m=+0.019551629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:28:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]: {
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_id": 0,
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "type": "bluestore"
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    },
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_id": 1,
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "type": "bluestore"
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    },
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_id": 2,
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:        "type": "bluestore"
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]:    }
Nov 24 13:28:49 np0005533938 dazzling_lehmann[125632]: }
Nov 24 13:28:49 np0005533938 systemd[1]: libpod-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope: Deactivated successfully.
Nov 24 13:28:49 np0005533938 podman[125615]: 2025-11-24 18:28:49.059917828 +0000 UTC m=+1.035073251 container died 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:28:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2d9b68526bf1b9ed58cfdc2197a820e0d102d48ff785559311659c1333191e55-merged.mount: Deactivated successfully.
Nov 24 13:28:49 np0005533938 podman[125615]: 2025-11-24 18:28:49.125463064 +0000 UTC m=+1.100618497 container remove 8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 24 13:28:49 np0005533938 systemd[1]: libpod-conmon-8c89d8c6b8cfc0d4bf725515f1ed532ff426e1b7142e3d6bb420d41697082e2c.scope: Deactivated successfully.
Nov 24 13:28:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:28:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:28:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 78fa0eff-b1b3-4ab6-9acc-68799d157e74 does not exist
Nov 24 13:28:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3faf0d27-436b-47af-b68b-fb443192d04a does not exist
Nov 24 13:28:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:28:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:28:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:28:58 np0005533938 python3.9[125878]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:28:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:00 np0005533938 python3.9[126165]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 24 13:29:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:01 np0005533938 python3.9[126317]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 24 13:29:02 np0005533938 python3.9[126471]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:29:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:03 np0005533938 python3.9[126623]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 24 13:29:04 np0005533938 python3.9[126775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:04 np0005533938 python3.9[126927]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:29:05 np0005533938 python3.9[127005]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:29:06 np0005533938 python3.9[127157]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:29:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:07 np0005533938 python3.9[127311]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 24 13:29:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:07 np0005533938 python3.9[127464]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 24 13:29:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:08 np0005533938 python3.9[127617]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:29:09 np0005533938 python3.9[127769]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 24 13:29:10 np0005533938 python3.9[127921]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:12 np0005533938 python3.9[128074]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:29:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:13 np0005533938 python3.9[128226]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:29:13 np0005533938 python3.9[128304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:29:14 np0005533938 python3.9[128456]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:29:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:14 np0005533938 python3.9[128534]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:29:15 np0005533938 python3.9[128686]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:17 np0005533938 python3.9[128837]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:29:18 np0005533938 python3.9[128989]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 24 13:29:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:19 np0005533938 python3.9[129139]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:29:20 np0005533938 python3.9[129291]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:29:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:20 np0005533938 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 24 13:29:20 np0005533938 systemd[1]: tuned.service: Deactivated successfully.
Nov 24 13:29:20 np0005533938 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 24 13:29:20 np0005533938 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 24 13:29:20 np0005533938 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 24 13:29:21 np0005533938 python3.9[129453]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 24 13:29:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:23 np0005533938 python3.9[129605]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:29:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:24 np0005533938 python3.9[129759]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:29:25 np0005533938 systemd[1]: session-37.scope: Deactivated successfully.
Nov 24 13:29:25 np0005533938 systemd[1]: session-37.scope: Consumed 53.477s CPU time.
Nov 24 13:29:25 np0005533938 systemd-logind[822]: Session 37 logged out. Waiting for processes to exit.
Nov 24 13:29:25 np0005533938 systemd-logind[822]: Removed session 37.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.728091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965728141, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 700, "num_deletes": 251, "total_data_size": 872275, "memory_usage": 885096, "flush_reason": "Manual Compaction"}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965736734, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 864559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9023, "largest_seqno": 9722, "table_properties": {"data_size": 860913, "index_size": 1490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7914, "raw_average_key_size": 18, "raw_value_size": 853616, "raw_average_value_size": 1999, "num_data_blocks": 69, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008907, "oldest_key_time": 1764008907, "file_creation_time": 1764008965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8678 microseconds, and 5338 cpu microseconds.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.736774) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 864559 bytes OK
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.736792) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738212) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738227) EVENT_LOG_v1 {"time_micros": 1764008965738222, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 868636, prev total WAL file size 868636, number of live WAL files 2.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(844KB)], [23(6627KB)]
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965738867, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7651586, "oldest_snapshot_seqno": -1}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3312 keys, 6143227 bytes, temperature: kUnknown
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965772728, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6143227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6119099, "index_size": 14739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80377, "raw_average_key_size": 24, "raw_value_size": 6057190, "raw_average_value_size": 1828, "num_data_blocks": 644, "num_entries": 3312, "num_filter_entries": 3312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764008965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.772973) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6143227 bytes
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.774376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.6 rd, 181.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(16.0) write-amplify(7.1) OK, records in: 3826, records dropped: 514 output_compression: NoCompression
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.774395) EVENT_LOG_v1 {"time_micros": 1764008965774386, "job": 8, "event": "compaction_finished", "compaction_time_micros": 33922, "compaction_time_cpu_micros": 17576, "output_level": 6, "num_output_files": 1, "total_output_size": 6143227, "num_input_records": 3826, "num_output_records": 3312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965774652, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764008965776089, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.738750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:25 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:29:25.776153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:29:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:31 np0005533938 systemd-logind[822]: New session 38 of user zuul.
Nov 24 13:29:31 np0005533938 systemd[1]: Started Session 38 of User zuul.
Nov 24 13:29:32 np0005533938 python3.9[129939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:29:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:33 np0005533938 python3.9[130095]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 24 13:29:34 np0005533938 python3.9[130248]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:29:34
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms']
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:29:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:29:35 np0005533938 python3.9[130332]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 13:29:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:37 np0005533938 python3.9[130485]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:39 np0005533938 python3.9[130638]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:29:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:40 np0005533938 python3.9[130791]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:29:41 np0005533938 python3.9[130943]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 24 13:29:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:42 np0005533938 python3.9[131093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:29:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:29:43 np0005533938 python3.9[131251]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:45 np0005533938 python3.9[131404]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:29:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:47 np0005533938 python3.9[131691]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 13:29:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:48 np0005533938 python3.9[131841]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:29:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:48 np0005533938 python3.9[131995]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:49 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 12a61a0a-98cc-4cef-a06a-2c3909c2ade8 does not exist
Nov 24 13:29:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev c5b823b5-2d1b-4cfd-a46c-79e989ab1ec3 does not exist
Nov 24 13:29:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 00b462ac-eded-4e3b-93fd-f512c71144ee does not exist
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:29:50 np0005533938 python3.9[132422]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.121785139 +0000 UTC m=+0.052791156 container create 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:29:51 np0005533938 systemd[1]: Started libpod-conmon-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope.
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.093539197 +0000 UTC m=+0.024545264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.214601722 +0000 UTC m=+0.145607739 container init 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.225637228 +0000 UTC m=+0.156643215 container start 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.229087432 +0000 UTC m=+0.160093429 container attach 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:29:51 np0005533938 funny_mahavira[132557]: 167 167
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.230784453 +0000 UTC m=+0.161790440 container died 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 13:29:51 np0005533938 systemd[1]: libpod-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope: Deactivated successfully.
Nov 24 13:29:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-81ad6d95006f968323db6b4f545c641c5c38b8cbfb6a4e1cfef9ac53f0800580-merged.mount: Deactivated successfully.
Nov 24 13:29:51 np0005533938 podman[132540]: 2025-11-24 18:29:51.274753145 +0000 UTC m=+0.205759132 container remove 614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:29:51 np0005533938 systemd[1]: libpod-conmon-614a462754560c58e7bd367c970a2ef6682bd3b1dfce5e168cc0d1c09b762710.scope: Deactivated successfully.
Nov 24 13:29:51 np0005533938 podman[132582]: 2025-11-24 18:29:51.477555415 +0000 UTC m=+0.059590391 container create 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:29:51 np0005533938 systemd[1]: Started libpod-conmon-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope.
Nov 24 13:29:51 np0005533938 podman[132582]: 2025-11-24 18:29:51.45211723 +0000 UTC m=+0.034152296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:51 np0005533938 podman[132582]: 2025-11-24 18:29:51.619234507 +0000 UTC m=+0.201269543 container init 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:29:51 np0005533938 podman[132582]: 2025-11-24 18:29:51.627572719 +0000 UTC m=+0.209607745 container start 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:29:51 np0005533938 podman[132582]: 2025-11-24 18:29:51.644935778 +0000 UTC m=+0.226970774 container attach 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:29:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:52 np0005533938 trusting_kirch[132599]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:29:52 np0005533938 trusting_kirch[132599]: --> relative data size: 1.0
Nov 24 13:29:52 np0005533938 trusting_kirch[132599]: --> All data devices are unavailable
Nov 24 13:29:52 np0005533938 systemd[1]: libpod-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope: Deactivated successfully.
Nov 24 13:29:52 np0005533938 podman[132582]: 2025-11-24 18:29:52.600024563 +0000 UTC m=+1.182059549 container died 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:29:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3dcc3a35ef3c5d8f08939d7564e1b60751df759f18ec252a970d092aec3682f5-merged.mount: Deactivated successfully.
Nov 24 13:29:52 np0005533938 podman[132582]: 2025-11-24 18:29:52.64795394 +0000 UTC m=+1.229988926 container remove 56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:29:52 np0005533938 systemd[1]: libpod-conmon-56bb7c5365aed8ba0288c82650ce5f5c0acd2bb5f62e4c440bc617575b33e62b.scope: Deactivated successfully.
Nov 24 13:29:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:53 np0005533938 python3.9[132841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.364221054 +0000 UTC m=+0.050798838 container create 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:29:53 np0005533938 systemd[1]: Started libpod-conmon-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope.
Nov 24 13:29:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.348143945 +0000 UTC m=+0.034721729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.44889589 +0000 UTC m=+0.135473714 container init 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.456574095 +0000 UTC m=+0.143151859 container start 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.460353776 +0000 UTC m=+0.146931590 container attach 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:29:53 np0005533938 tender_cartwright[133027]: 167 167
Nov 24 13:29:53 np0005533938 systemd[1]: libpod-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope: Deactivated successfully.
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.464603049 +0000 UTC m=+0.151180833 container died 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:29:53 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a9e3a0849ba90a9678715de863173b0d666416fe4147598fc077cea570608201-merged.mount: Deactivated successfully.
Nov 24 13:29:53 np0005533938 podman[133011]: 2025-11-24 18:29:53.513762017 +0000 UTC m=+0.200339811 container remove 45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cartwright, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:29:53 np0005533938 systemd[1]: libpod-conmon-45a9b91b02d889422736965960f487a3fd6b4ae5973f8df0d23406e4943bc00a.scope: Deactivated successfully.
Nov 24 13:29:53 np0005533938 podman[133110]: 2025-11-24 18:29:53.708089401 +0000 UTC m=+0.047982870 container create 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:29:53 np0005533938 systemd[1]: Started libpod-conmon-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope.
Nov 24 13:29:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:53 np0005533938 podman[133110]: 2025-11-24 18:29:53.68941501 +0000 UTC m=+0.029308569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:53 np0005533938 podman[133110]: 2025-11-24 18:29:53.79617529 +0000 UTC m=+0.136068759 container init 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:29:53 np0005533938 podman[133110]: 2025-11-24 18:29:53.804022119 +0000 UTC m=+0.143915588 container start 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:29:53 np0005533938 podman[133110]: 2025-11-24 18:29:53.80735267 +0000 UTC m=+0.147246139 container attach 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:29:53 np0005533938 python3.9[133136]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]: {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    "0": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "devices": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "/dev/loop3"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            ],
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_name": "ceph_lv0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_size": "21470642176",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "name": "ceph_lv0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "tags": {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_name": "ceph",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.crush_device_class": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.encrypted": "0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_id": "0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.vdo": "0"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            },
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "vg_name": "ceph_vg0"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        }
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    ],
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    "1": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "devices": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "/dev/loop4"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            ],
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_name": "ceph_lv1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_size": "21470642176",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "name": "ceph_lv1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "tags": {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_name": "ceph",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.crush_device_class": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.encrypted": "0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_id": "1",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.vdo": "0"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            },
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "vg_name": "ceph_vg1"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        }
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    ],
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    "2": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "devices": [
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "/dev/loop5"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            ],
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_name": "ceph_lv2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_size": "21470642176",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "name": "ceph_lv2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "tags": {
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.cluster_name": "ceph",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.crush_device_class": "",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.encrypted": "0",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osd_id": "2",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:                "ceph.vdo": "0"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            },
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "type": "block",
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:            "vg_name": "ceph_vg2"
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:        }
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]:    ]
Nov 24 13:29:54 np0005533938 recursing_vaughan[133142]: }
Nov 24 13:29:54 np0005533938 systemd[1]: libpod-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope: Deactivated successfully.
Nov 24 13:29:54 np0005533938 podman[133110]: 2025-11-24 18:29:54.536994727 +0000 UTC m=+0.876888196 container died 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:29:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-02a36a02d798454e09198ac0ad15defe793fb784a0a63dbd764f056cb9f1534a-merged.mount: Deactivated successfully.
Nov 24 13:29:54 np0005533938 podman[133110]: 2025-11-24 18:29:54.647296472 +0000 UTC m=+0.987189941 container remove 53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_vaughan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:29:54 np0005533938 systemd[1]: libpod-conmon-53242f617ed147432dfaae7464c06a0995331fff17a3d94ba63e153889f5f3cd.scope: Deactivated successfully.
Nov 24 13:29:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:54 np0005533938 systemd[1]: session-38.scope: Deactivated successfully.
Nov 24 13:29:54 np0005533938 systemd[1]: session-38.scope: Consumed 17.604s CPU time.
Nov 24 13:29:54 np0005533938 systemd-logind[822]: Session 38 logged out. Waiting for processes to exit.
Nov 24 13:29:54 np0005533938 systemd-logind[822]: Removed session 38.
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.17620961 +0000 UTC m=+0.032462755 container create d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:29:55 np0005533938 systemd[1]: Started libpod-conmon-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope.
Nov 24 13:29:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.244843538 +0000 UTC m=+0.101096703 container init d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.250985647 +0000 UTC m=+0.107238792 container start d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.253773094 +0000 UTC m=+0.110026259 container attach d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:29:55 np0005533938 vibrant_jang[133344]: 167 167
Nov 24 13:29:55 np0005533938 systemd[1]: libpod-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope: Deactivated successfully.
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.255383653 +0000 UTC m=+0.111636798 container died d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.161060844 +0000 UTC m=+0.017314009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dd6eb7f4a9fff75f50879d39516b2693be4fcb255477063fa44d8e428d1fad92-merged.mount: Deactivated successfully.
Nov 24 13:29:55 np0005533938 podman[133327]: 2025-11-24 18:29:55.304463849 +0000 UTC m=+0.160716994 container remove d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:29:55 np0005533938 systemd[1]: libpod-conmon-d6ffae72d6ff8daac0014446fbe794661295f8c2d27c4fbb7765c86561f722aa.scope: Deactivated successfully.
Nov 24 13:29:55 np0005533938 podman[133371]: 2025-11-24 18:29:55.453438208 +0000 UTC m=+0.040503340 container create d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:29:55 np0005533938 systemd[1]: Started libpod-conmon-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope.
Nov 24 13:29:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:29:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:29:55 np0005533938 podman[133371]: 2025-11-24 18:29:55.517952176 +0000 UTC m=+0.105017298 container init d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:29:55 np0005533938 podman[133371]: 2025-11-24 18:29:55.523447989 +0000 UTC m=+0.110513131 container start d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 24 13:29:55 np0005533938 podman[133371]: 2025-11-24 18:29:55.527378944 +0000 UTC m=+0.114444076 container attach d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:29:55 np0005533938 podman[133371]: 2025-11-24 18:29:55.433951457 +0000 UTC m=+0.021016589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]: {
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_id": 0,
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "type": "bluestore"
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    },
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_id": 1,
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "type": "bluestore"
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    },
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_id": 2,
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:        "type": "bluestore"
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]:    }
Nov 24 13:29:56 np0005533938 confident_wozniak[133387]: }
Nov 24 13:29:56 np0005533938 systemd[1]: libpod-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope: Deactivated successfully.
Nov 24 13:29:56 np0005533938 podman[133371]: 2025-11-24 18:29:56.480355606 +0000 UTC m=+1.067420728 container died d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:29:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f7c82043014fd0b342a3e8430c446b165b32548cde65697880f823c1d2cde83c-merged.mount: Deactivated successfully.
Nov 24 13:29:56 np0005533938 podman[133371]: 2025-11-24 18:29:56.535874988 +0000 UTC m=+1.122940120 container remove d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:29:56 np0005533938 systemd[1]: libpod-conmon-d301a9675e6c597217d353ef53aff158bf69e578b0108110cbbeb60e4f69a8ca.scope: Deactivated successfully.
Nov 24 13:29:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:29:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:29:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:56 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev a09f657f-ee9a-4e5b-ba35-ad49ce1a5968 does not exist
Nov 24 13:29:56 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 4e0b5366-6db9-4c11-91f3-2e8e92e75fdf does not exist
Nov 24 13:29:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:29:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:29:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:29:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:00 np0005533938 systemd-logind[822]: New session 39 of user zuul.
Nov 24 13:30:00 np0005533938 systemd[1]: Started Session 39 of User zuul.
Nov 24 13:30:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:01 np0005533938 python3.9[133638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:30:02 np0005533938 python3.9[133792]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:30:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:03 np0005533938 python3.9[133985]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:30:03 np0005533938 systemd[1]: session-39.scope: Deactivated successfully.
Nov 24 13:30:03 np0005533938 systemd[1]: session-39.scope: Consumed 2.154s CPU time.
Nov 24 13:30:03 np0005533938 systemd-logind[822]: Session 39 logged out. Waiting for processes to exit.
Nov 24 13:30:03 np0005533938 systemd-logind[822]: Removed session 39.
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:09 np0005533938 systemd-logind[822]: New session 40 of user zuul.
Nov 24 13:30:09 np0005533938 systemd[1]: Started Session 40 of User zuul.
Nov 24 13:30:10 np0005533938 python3.9[134165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:30:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:11 np0005533938 python3.9[134319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:30:12 np0005533938 python3.9[134475]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:30:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:13 np0005533938 python3.9[134559]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:30:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:15 np0005533938 python3.9[134712]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:30:16 np0005533938 python3.9[134907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:17 np0005533938 python3.9[135059]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:30:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:18 np0005533938 python3.9[135224]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:18 np0005533938 python3.9[135302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:19 np0005533938 python3.9[135454]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:19 np0005533938 python3.9[135532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:30:20 np0005533938 python3.9[135684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:30:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:21 np0005533938 python3.9[135836]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:30:22 np0005533938 python3.9[135988]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:30:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:22 np0005533938 python3.9[136140]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:30:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:23 np0005533938 python3.9[136292]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:30:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:25 np0005533938 python3.9[136445]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:30:26 np0005533938 python3.9[136599]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:30:26 np0005533938 python3.9[136751]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:30:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:27 np0005533938 python3.9[136903]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:30:28 np0005533938 python3.9[137057]: ansible-service_facts Invoked
Nov 24 13:30:28 np0005533938 network[137074]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:30:28 np0005533938 network[137075]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:30:28 np0005533938 network[137076]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:30:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:33 np0005533938 python3.9[137528]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:30:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:30:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5370 writes, 23K keys, 5370 commit groups, 1.0 writes per commit group, ingest: 18.36 MB, 0.03 MB/s#012Interval WAL: 5370 writes, 751 syncs, 7.15 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:30:34
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms']
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:30:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:30:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:35 np0005533938 python3.9[137681]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 24 13:30:37 np0005533938 python3.9[137833]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:37 np0005533938 python3.9[137911]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:38 np0005533938 python3.9[138063]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:38 np0005533938 python3.9[138141]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:39 np0005533938 python3.9[138293]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:30:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.0 total, 600.0 interval#012Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s#012Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 24 13:30:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:41 np0005533938 python3.9[138445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:30:42 np0005533938 python3.9[138529]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:30:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:42 np0005533938 systemd[1]: session-40.scope: Deactivated successfully.
Nov 24 13:30:42 np0005533938 systemd[1]: session-40.scope: Consumed 23.279s CPU time.
Nov 24 13:30:42 np0005533938 systemd-logind[822]: Session 40 logged out. Waiting for processes to exit.
Nov 24 13:30:42 np0005533938 systemd-logind[822]: Removed session 40.
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:30:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:30:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:30:47 np0005533938 systemd-logind[822]: New session 41 of user zuul.
Nov 24 13:30:47 np0005533938 systemd[1]: Started Session 41 of User zuul.
Nov 24 13:30:48 np0005533938 python3.9[138711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:49 np0005533938 python3.9[138863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:50 np0005533938 python3.9[138941]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:50 np0005533938 systemd[1]: session-41.scope: Deactivated successfully.
Nov 24 13:30:50 np0005533938 systemd[1]: session-41.scope: Consumed 1.575s CPU time.
Nov 24 13:30:50 np0005533938 systemd-logind[822]: Session 41 logged out. Waiting for processes to exit.
Nov 24 13:30:50 np0005533938 systemd-logind[822]: Removed session 41.
Nov 24 13:30:50 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 13:30:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:56 np0005533938 systemd-logind[822]: New session 42 of user zuul.
Nov 24 13:30:56 np0005533938 systemd[1]: Started Session 42 of User zuul.
Nov 24 13:30:56 np0005533938 python3.9[139144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:30:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:30:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d2b1027e-8718-432e-b316-1e77165b1930 does not exist
Nov 24 13:30:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 289e10f1-1f0c-4ab5-a5e7-3e61a4491c8e does not exist
Nov 24 13:30:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 8153771d-e66a-4ebb-89f5-4f1244fe6cc8 does not exist
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:30:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.900068328 +0000 UTC m=+0.043686211 container create adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:30:57 np0005533938 systemd[1]: Started libpod-conmon-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope.
Nov 24 13:30:57 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.877721401 +0000 UTC m=+0.021339314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.976150307 +0000 UTC m=+0.119768210 container init adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.982399013 +0000 UTC m=+0.126016896 container start adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.985357017 +0000 UTC m=+0.128974920 container attach adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 24 13:30:57 np0005533938 nostalgic_goldberg[139562]: 167 167
Nov 24 13:30:57 np0005533938 podman[139546]: 2025-11-24 18:30:57.987641124 +0000 UTC m=+0.131259007 container died adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:30:57 np0005533938 systemd[1]: libpod-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope: Deactivated successfully.
Nov 24 13:30:57 np0005533938 python3.9[139533]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:58 np0005533938 systemd[1]: var-lib-containers-storage-overlay-90b88c6011712b774490dbb6453be89635a5ef14700ee4423e1f8b83c267c9ab-merged.mount: Deactivated successfully.
Nov 24 13:30:58 np0005533938 podman[139546]: 2025-11-24 18:30:58.024816162 +0000 UTC m=+0.168434055 container remove adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:30:58 np0005533938 systemd[1]: libpod-conmon-adbb712220616c2a40fdd2801f46cc0f168a7107f349e84018658227ffd08e2e.scope: Deactivated successfully.
Nov 24 13:30:58 np0005533938 podman[139610]: 2025-11-24 18:30:58.162821967 +0000 UTC m=+0.038188985 container create f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:30:58 np0005533938 systemd[1]: Started libpod-conmon-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope.
Nov 24 13:30:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:30:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:30:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:30:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:30:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:30:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:30:58 np0005533938 podman[139610]: 2025-11-24 18:30:58.14854435 +0000 UTC m=+0.023911378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:30:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:30:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:30:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:30:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:30:58 np0005533938 podman[139610]: 2025-11-24 18:30:58.725218664 +0000 UTC m=+0.600585702 container init f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:30:58 np0005533938 podman[139610]: 2025-11-24 18:30:58.735756387 +0000 UTC m=+0.611123395 container start f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:30:58 np0005533938 podman[139610]: 2025-11-24 18:30:58.738509735 +0000 UTC m=+0.613876743 container attach f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:30:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:30:59 np0005533938 python3.9[139781]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:30:59 np0005533938 python3.9[139872]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.i74xqfct recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:30:59 np0005533938 focused_jones[139666]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:30:59 np0005533938 focused_jones[139666]: --> relative data size: 1.0
Nov 24 13:30:59 np0005533938 focused_jones[139666]: --> All data devices are unavailable
Nov 24 13:30:59 np0005533938 systemd[1]: libpod-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Deactivated successfully.
Nov 24 13:30:59 np0005533938 systemd[1]: libpod-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Consumed 1.078s CPU time.
Nov 24 13:30:59 np0005533938 podman[139610]: 2025-11-24 18:30:59.870298144 +0000 UTC m=+1.745665152 container died f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:30:59 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ce1d7e3f537a0ef0e58264ef8dc05bb95ac90c6c3fa15b7a4bed2d1cd196542c-merged.mount: Deactivated successfully.
Nov 24 13:30:59 np0005533938 podman[139610]: 2025-11-24 18:30:59.931982174 +0000 UTC m=+1.807349182 container remove f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:30:59 np0005533938 systemd[1]: libpod-conmon-f35951e018b651100e1becc31164c599530e0b436f1aca3c587a1d4a7a3b7022.scope: Deactivated successfully.
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.557019745 +0000 UTC m=+0.043917037 container create a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:31:00 np0005533938 systemd[1]: Started libpod-conmon-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope.
Nov 24 13:31:00 np0005533938 python3.9[140172]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.536453152 +0000 UTC m=+0.023350474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.644586171 +0000 UTC m=+0.131483503 container init a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.652364825 +0000 UTC m=+0.139262117 container start a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.656865917 +0000 UTC m=+0.143763209 container attach a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:31:00 np0005533938 distracted_wing[140204]: 167 167
Nov 24 13:31:00 np0005533938 systemd[1]: libpod-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope: Deactivated successfully.
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.660725534 +0000 UTC m=+0.147622826 container died a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:31:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-37686dea258588001ae4378d5ac6c5c49d563ae711dbff5d8db09f949af1994e-merged.mount: Deactivated successfully.
Nov 24 13:31:00 np0005533938 podman[140187]: 2025-11-24 18:31:00.696079026 +0000 UTC m=+0.182976308 container remove a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:31:00 np0005533938 systemd[1]: libpod-conmon-a17a5c225cb80a5b2f66d5551e5fbcc4fd7461df4f85cd47d7797b19211a245d.scope: Deactivated successfully.
Nov 24 13:31:00 np0005533938 podman[140251]: 2025-11-24 18:31:00.898154398 +0000 UTC m=+0.065436404 container create 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:31:00 np0005533938 systemd[1]: Started libpod-conmon-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope.
Nov 24 13:31:00 np0005533938 podman[140251]: 2025-11-24 18:31:00.865798081 +0000 UTC m=+0.033080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:31:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:31:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:01 np0005533938 podman[140251]: 2025-11-24 18:31:01.015368144 +0000 UTC m=+0.182650140 container init 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:31:01 np0005533938 podman[140251]: 2025-11-24 18:31:01.028188564 +0000 UTC m=+0.195470570 container start 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:31:01 np0005533938 podman[140251]: 2025-11-24 18:31:01.032663385 +0000 UTC m=+0.199945441 container attach 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:31:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:01 np0005533938 python3.9[140325]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._98jmdwk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]: {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    "0": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "devices": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "/dev/loop3"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            ],
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_name": "ceph_lv0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_size": "21470642176",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "name": "ceph_lv0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "tags": {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_name": "ceph",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.crush_device_class": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.encrypted": "0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_id": "0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.vdo": "0"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            },
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "vg_name": "ceph_vg0"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        }
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    ],
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    "1": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "devices": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "/dev/loop4"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            ],
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_name": "ceph_lv1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_size": "21470642176",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "name": "ceph_lv1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "tags": {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_name": "ceph",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.crush_device_class": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.encrypted": "0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_id": "1",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.vdo": "0"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            },
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "vg_name": "ceph_vg1"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        }
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    ],
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    "2": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "devices": [
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "/dev/loop5"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            ],
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_name": "ceph_lv2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_size": "21470642176",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "name": "ceph_lv2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "tags": {
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.cluster_name": "ceph",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.crush_device_class": "",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.encrypted": "0",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osd_id": "2",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:                "ceph.vdo": "0"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            },
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "type": "block",
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:            "vg_name": "ceph_vg2"
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:        }
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]:    ]
Nov 24 13:31:01 np0005533938 frosty_hoover[140292]: }
Nov 24 13:31:01 np0005533938 systemd[1]: libpod-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope: Deactivated successfully.
Nov 24 13:31:01 np0005533938 podman[140251]: 2025-11-24 18:31:01.833234828 +0000 UTC m=+1.000516794 container died 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:31:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7d262533aff7538113ede6d5bf0122027e74f5ec89768910bdab2c5ff6e41b6d-merged.mount: Deactivated successfully.
Nov 24 13:31:01 np0005533938 podman[140251]: 2025-11-24 18:31:01.89979843 +0000 UTC m=+1.067080416 container remove 4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:31:01 np0005533938 systemd[1]: libpod-conmon-4bb1a1a5937cde129c18dbcb6f38645e218251903d7d1667444739e716ceedad.scope: Deactivated successfully.
Nov 24 13:31:01 np0005533938 python3.9[140481]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.538206145 +0000 UTC m=+0.038434591 container create 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:31:02 np0005533938 systemd[1]: Started libpod-conmon-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope.
Nov 24 13:31:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:02 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.603395162 +0000 UTC m=+0.103623658 container init 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.611056713 +0000 UTC m=+0.111285169 container start 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.614303514 +0000 UTC m=+0.114531960 container attach 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:31:02 np0005533938 condescending_cohen[140802]: 167 167
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.616887298 +0000 UTC m=+0.117115744 container died 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:31:02 np0005533938 systemd[1]: libpod-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope: Deactivated successfully.
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.524070702 +0000 UTC m=+0.024299148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:31:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-556f34b91357e37d3ab2601bf39ea4f366557d39c408173cafb9f472636f6554-merged.mount: Deactivated successfully.
Nov 24 13:31:02 np0005533938 podman[140783]: 2025-11-24 18:31:02.652326143 +0000 UTC m=+0.152554609 container remove 83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cohen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:31:02 np0005533938 systemd[1]: libpod-conmon-83ec3da4050d90080e5d154dd89a1e0cb9f4fe62c7f7ea51ec67cd23c0202b39.scope: Deactivated successfully.
Nov 24 13:31:02 np0005533938 python3.9[140785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:02 np0005533938 podman[140851]: 2025-11-24 18:31:02.843070134 +0000 UTC m=+0.060890601 container create 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:31:02 np0005533938 systemd[1]: Started libpod-conmon-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope.
Nov 24 13:31:02 np0005533938 podman[140851]: 2025-11-24 18:31:02.821475945 +0000 UTC m=+0.039296452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:31:02 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:31:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:31:02 np0005533938 podman[140851]: 2025-11-24 18:31:02.939256235 +0000 UTC m=+0.157076732 container init 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:31:02 np0005533938 podman[140851]: 2025-11-24 18:31:02.947101791 +0000 UTC m=+0.164922248 container start 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:31:02 np0005533938 podman[140851]: 2025-11-24 18:31:02.949994483 +0000 UTC m=+0.167814940 container attach 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:31:03 np0005533938 python3.9[140923]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:31:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:03 np0005533938 python3.9[141081]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]: {
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_id": 0,
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "type": "bluestore"
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    },
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_id": 1,
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "type": "bluestore"
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    },
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_id": 2,
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:        "type": "bluestore"
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]:    }
Nov 24 13:31:03 np0005533938 upbeat_elgamal[140912]: }
Nov 24 13:31:03 np0005533938 systemd[1]: libpod-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope: Deactivated successfully.
Nov 24 13:31:03 np0005533938 podman[140851]: 2025-11-24 18:31:03.888238191 +0000 UTC m=+1.106058648 container died 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:31:03 np0005533938 systemd[1]: var-lib-containers-storage-overlay-966f9ac3c9f237f8af43dcb4ab0086daf738a0fa7dbee13ee2d4089392322108-merged.mount: Deactivated successfully.
Nov 24 13:31:04 np0005533938 podman[140851]: 2025-11-24 18:31:04.053579318 +0000 UTC m=+1.271399765 container remove 80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:31:04 np0005533938 systemd[1]: libpod-conmon-80af0449cd79304e415a2e4ad542304ce45fce7b71dd99c7b1ad7e5ff282886f.scope: Deactivated successfully.
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3350fb55-2a0b-44ee-8516-716208011328 does not exist
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 53e0ba9e-374f-4eb9-9dd4-152504a62ee9 does not exist
Nov 24 13:31:04 np0005533938 python3.9[141197]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:31:04 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:04 np0005533938 python3.9[141399]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:05 np0005533938 python3.9[141551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:06 np0005533938 python3.9[141629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:06 np0005533938 python3.9[141781]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:07 np0005533938 python3.9[141859]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:08 np0005533938 python3.9[142011]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:31:08 np0005533938 systemd[1]: Reloading.
Nov 24 13:31:08 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:31:08 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:31:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:09 np0005533938 python3.9[142199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:09 np0005533938 python3.9[142277]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:10 np0005533938 python3.9[142429]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:10 np0005533938 python3.9[142507]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:11 np0005533938 python3.9[142659]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:31:11 np0005533938 systemd[1]: Reloading.
Nov 24 13:31:11 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:31:11 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:31:11 np0005533938 systemd[1]: Starting Create netns directory...
Nov 24 13:31:11 np0005533938 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 13:31:11 np0005533938 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 13:31:11 np0005533938 systemd[1]: Finished Create netns directory.
Nov 24 13:31:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:12 np0005533938 python3.9[142851]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:31:12 np0005533938 network[142868]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:31:12 np0005533938 network[142869]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:31:12 np0005533938 network[142870]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:31:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:16 np0005533938 python3.9[143132]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:16 np0005533938 python3.9[143210]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:17 np0005533938 python3.9[143362]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:18 np0005533938 python3.9[143514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:18 np0005533938 python3.9[143592]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:19 np0005533938 python3.9[143744]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 24 13:31:19 np0005533938 systemd[1]: Starting Time & Date Service...
Nov 24 13:31:19 np0005533938 systemd[1]: Started Time & Date Service.
Nov 24 13:31:20 np0005533938 python3.9[143900]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:21 np0005533938 python3.9[144052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:21 np0005533938 python3.9[144130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:22 np0005533938 python3.9[144282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:22 np0005533938 python3.9[144360]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ggb8ahrg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:23 np0005533938 python3.9[144512]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:23 np0005533938 python3.9[144590]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:24 np0005533938 python3.9[144742]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:31:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:25 np0005533938 python3[144895]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 13:31:26 np0005533938 python3.9[145047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:26 np0005533938 python3.9[145125]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:27 np0005533938 python3.9[145277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:27 np0005533938 python3.9[145355]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:28 np0005533938 python3.9[145507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:29 np0005533938 python3.9[145585]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:29 np0005533938 python3.9[145737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:30 np0005533938 python3.9[145815]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:31 np0005533938 python3.9[145967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:31 np0005533938 python3.9[146045]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:32 np0005533938 python3.9[146197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:31:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:33 np0005533938 python3.9[146352]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:34 np0005533938 python3.9[146504]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:31:34
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log']
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:31:34 np0005533938 python3.9[146656]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:31:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:31:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:35 np0005533938 python3.9[146808]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 13:31:36 np0005533938 python3.9[146960]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 24 13:31:36 np0005533938 systemd[1]: session-42.scope: Deactivated successfully.
Nov 24 13:31:36 np0005533938 systemd[1]: session-42.scope: Consumed 28.884s CPU time.
Nov 24 13:31:36 np0005533938 systemd-logind[822]: Session 42 logged out. Waiting for processes to exit.
Nov 24 13:31:36 np0005533938 systemd-logind[822]: Removed session 42.
Nov 24 13:31:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:42 np0005533938 systemd-logind[822]: New session 43 of user zuul.
Nov 24 13:31:42 np0005533938 systemd[1]: Started Session 43 of User zuul.
Nov 24 13:31:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:43 np0005533938 python3.9[147140]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:31:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:43 np0005533938 python3.9[147292]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:31:44 np0005533938 python3.9[147447]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 24 13:31:45 np0005533938 python3.9[147599]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.3up6pgcf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:31:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:45 np0005533938 python3.9[147724]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.3up6pgcf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009104.7344763-44-148762557414672/.source.3up6pgcf _original_basename=.h7unejvv follow=False checksum=c8681bd5f60cfe8e414de701936dcfa8bc77df8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:46 np0005533938 python3.9[147876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:31:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:47 np0005533938 python3.9[148028]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhS8frVtJkphIV3qjYEBaOrfFAUD1SVRr7LLCHE4Oz5qMeQHKYm90YB9nO7ntC/BIXenfYoTm6fYVn1JaiGoGSQdRBXPQG/o6WD6Ec3pD/Mcl/KMJGYuMHxaEizMQ3wOpo20hOTbEsu6v2y+3ETjeAG0UF9fWh/vCDy6bX0hMh8o7mf9skIV8gvWuCbJo4Vk92qBh7z9qccV5j5J5maU9c28+VEF1nlN0GSyYT/IRFdD7gDE7QFZ9QpapaWGSFE7nCTgz4Mw4nnJ+KaxvkxxHf4knCpDxk59+uk/+9G8oUiFokkDbJiPI6sZS+BALztR/CzJpNrAYaYmhzjbSRYb51wPj5EnXYzqgik4JzhmsqsepLD79RGK2b4ZWnQVP7WFOUL+Wm4+MkbF0LVmcy1XJeA5yhmhodU+fpO1t1SZRONc1eqep1NVqxMOHXOQgKGpIAg95Vpx9szp5NhOkzp1cQTeEhxfog0RyENmd9NxKBpu3NmtFN+dETuLT2Co1JMhM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA2lZlyCN0FJ/jD1EDSdkabXa5aE54G6xn7+v3fPL+BD#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFHJ7xweyewLWbij/U6h4iEFO2zmE+OAqJetXAaVahyXo6KOKB5z+dQ1ItOa9RPE9AAjyAVton3sCrkTSjqY88=#012 create=True mode=0644 path=/tmp/ansible.3up6pgcf state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:48 np0005533938 python3.9[148180]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3up6pgcf' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:31:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:49 np0005533938 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 24 13:31:49 np0005533938 python3.9[148334]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3up6pgcf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:31:50 np0005533938 systemd[1]: session-43.scope: Deactivated successfully.
Nov 24 13:31:50 np0005533938 systemd[1]: session-43.scope: Consumed 5.044s CPU time.
Nov 24 13:31:50 np0005533938 systemd-logind[822]: Session 43 logged out. Waiting for processes to exit.
Nov 24 13:31:50 np0005533938 systemd-logind[822]: Removed session 43.
Nov 24 13:31:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:55 np0005533938 systemd-logind[822]: New session 44 of user zuul.
Nov 24 13:31:55 np0005533938 systemd[1]: Started Session 44 of User zuul.
Nov 24 13:31:56 np0005533938 python3.9[148514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:31:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:31:57 np0005533938 python3.9[148670]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 13:31:58 np0005533938 python3.9[148824]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:31:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:31:59 np0005533938 python3.9[148977]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:32:00 np0005533938 python3.9[149130]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:32:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:01 np0005533938 python3.9[149282]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:01 np0005533938 systemd[1]: session-44.scope: Deactivated successfully.
Nov 24 13:32:01 np0005533938 systemd[1]: session-44.scope: Consumed 3.748s CPU time.
Nov 24 13:32:01 np0005533938 systemd-logind[822]: Session 44 logged out. Waiting for processes to exit.
Nov 24 13:32:01 np0005533938 systemd-logind[822]: Removed session 44.
Nov 24 13:32:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:04 np0005533938 podman[149479]: 2025-11-24 18:32:04.895259674 +0000 UTC m=+0.067768704 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:32:04 np0005533938 podman[149479]: 2025-11-24 18:32:04.980891708 +0000 UTC m=+0.153400718 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:32:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:32:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:32:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 65954c29-d63d-4cea-b757-9e87f020651d does not exist
Nov 24 13:32:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f8f4b38e-dade-4c4b-be85-4bfeaaf07911 does not exist
Nov 24 13:32:06 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev a454fb02-3cf5-4a11-b222-fc95716b94a3 does not exist
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:32:06 np0005533938 systemd-logind[822]: New session 45 of user zuul.
Nov 24 13:32:06 np0005533938 systemd[1]: Started Session 45 of User zuul.
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.894203959 +0000 UTC m=+0.042389032 container create e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:32:06 np0005533938 systemd[1]: Started libpod-conmon-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope.
Nov 24 13:32:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.873500566 +0000 UTC m=+0.021685709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.977596515 +0000 UTC m=+0.125781608 container init e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.984303607 +0000 UTC m=+0.132488680 container start e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.987418998 +0000 UTC m=+0.135604121 container attach e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:32:06 np0005533938 bold_sinoussi[149982]: 167 167
Nov 24 13:32:06 np0005533938 systemd[1]: libpod-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope: Deactivated successfully.
Nov 24 13:32:06 np0005533938 conmon[149982]: conmon e3a101a9ff9114ae5567 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope/container/memory.events
Nov 24 13:32:06 np0005533938 podman[149965]: 2025-11-24 18:32:06.99061569 +0000 UTC m=+0.138800773 container died e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:32:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bff906e1ca594529c2f5f1bc8ff083c531caa535da868a9861c55aba79b210f8-merged.mount: Deactivated successfully.
Nov 24 13:32:07 np0005533938 podman[149965]: 2025-11-24 18:32:07.033075022 +0000 UTC m=+0.181260105 container remove e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:32:07 np0005533938 systemd[1]: libpod-conmon-e3a101a9ff9114ae5567d6ff4a04d04ecc2660bd6018c04001b380dc1ae03976.scope: Deactivated successfully.
Nov 24 13:32:07 np0005533938 podman[150007]: 2025-11-24 18:32:07.182733193 +0000 UTC m=+0.042965076 container create 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:32:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:07 np0005533938 systemd[1]: Started libpod-conmon-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope.
Nov 24 13:32:07 np0005533938 podman[150007]: 2025-11-24 18:32:07.162466472 +0000 UTC m=+0.022698435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:07 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:07 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:07 np0005533938 podman[150007]: 2025-11-24 18:32:07.27588945 +0000 UTC m=+0.136121333 container init 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:32:07 np0005533938 podman[150007]: 2025-11-24 18:32:07.284731728 +0000 UTC m=+0.144963591 container start 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:32:07 np0005533938 podman[150007]: 2025-11-24 18:32:07.287892469 +0000 UTC m=+0.148124352 container attach 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:32:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:07 np0005533938 python3.9[150125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:32:08 np0005533938 distracted_saha[150041]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:32:08 np0005533938 distracted_saha[150041]: --> relative data size: 1.0
Nov 24 13:32:08 np0005533938 distracted_saha[150041]: --> All data devices are unavailable
Nov 24 13:32:08 np0005533938 systemd[1]: libpod-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Deactivated successfully.
Nov 24 13:32:08 np0005533938 systemd[1]: libpod-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Consumed 1.009s CPU time.
Nov 24 13:32:08 np0005533938 podman[150007]: 2025-11-24 18:32:08.360253562 +0000 UTC m=+1.220485445 container died 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:32:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4a5891960ed5ccfc0240d643f1c8bc2988b4f016e19472c935b8f147a38c5dd2-merged.mount: Deactivated successfully.
Nov 24 13:32:08 np0005533938 podman[150007]: 2025-11-24 18:32:08.411258085 +0000 UTC m=+1.271489958 container remove 3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 24 13:32:08 np0005533938 systemd[1]: libpod-conmon-3713f05f57bff28f136b987b49727054cfa2efd9d668dcc95c975648d59e3f50.scope: Deactivated successfully.
Nov 24 13:32:08 np0005533938 python3.9[150341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:32:08 np0005533938 podman[150462]: 2025-11-24 18:32:08.962371265 +0000 UTC m=+0.034124649 container create 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:32:08 np0005533938 systemd[1]: Started libpod-conmon-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope.
Nov 24 13:32:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:09.03135907 +0000 UTC m=+0.103112474 container init 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:09.036957784 +0000 UTC m=+0.108711168 container start 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:09.03953498 +0000 UTC m=+0.111288384 container attach 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:32:09 np0005533938 upbeat_banzai[150481]: 167 167
Nov 24 13:32:09 np0005533938 systemd[1]: libpod-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope: Deactivated successfully.
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:09.040784952 +0000 UTC m=+0.112538336 container died 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:08.948465707 +0000 UTC m=+0.020219121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:09 np0005533938 systemd[1]: var-lib-containers-storage-overlay-30a093f55d6e4f2269e043db37f214476b9df1d83af476eda3a3ac260bb632ea-merged.mount: Deactivated successfully.
Nov 24 13:32:09 np0005533938 podman[150462]: 2025-11-24 18:32:09.073163366 +0000 UTC m=+0.144916740 container remove 51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:32:09 np0005533938 systemd[1]: libpod-conmon-51c99b9ecce86ecf7e8334eed4b4bb0bf8d9c3b0cf9fee63ebd5f91b869bf978.scope: Deactivated successfully.
Nov 24 13:32:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:09 np0005533938 podman[150506]: 2025-11-24 18:32:09.213365663 +0000 UTC m=+0.034777006 container create e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:32:09 np0005533938 systemd[1]: Started libpod-conmon-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope.
Nov 24 13:32:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:09 np0005533938 podman[150506]: 2025-11-24 18:32:09.278788047 +0000 UTC m=+0.100199410 container init e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:32:09 np0005533938 podman[150506]: 2025-11-24 18:32:09.284827442 +0000 UTC m=+0.106238785 container start e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:32:09 np0005533938 podman[150506]: 2025-11-24 18:32:09.288286001 +0000 UTC m=+0.109697344 container attach e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:32:09 np0005533938 podman[150506]: 2025-11-24 18:32:09.198245444 +0000 UTC m=+0.019656817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:09 np0005533938 python3.9[150602]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 24 13:32:09 np0005533938 exciting_khorana[150522]: {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    "0": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "devices": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "/dev/loop3"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            ],
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_name": "ceph_lv0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_size": "21470642176",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "name": "ceph_lv0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "tags": {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_name": "ceph",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.crush_device_class": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.encrypted": "0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_id": "0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.vdo": "0"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            },
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "vg_name": "ceph_vg0"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        }
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    ],
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    "1": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "devices": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "/dev/loop4"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            ],
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_name": "ceph_lv1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_size": "21470642176",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "name": "ceph_lv1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "tags": {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_name": "ceph",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.crush_device_class": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.encrypted": "0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_id": "1",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.vdo": "0"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            },
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "vg_name": "ceph_vg1"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        }
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    ],
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    "2": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "devices": [
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "/dev/loop5"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            ],
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_name": "ceph_lv2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_size": "21470642176",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "name": "ceph_lv2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "tags": {
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.cluster_name": "ceph",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.crush_device_class": "",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.encrypted": "0",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osd_id": "2",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:                "ceph.vdo": "0"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            },
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "type": "block",
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:            "vg_name": "ceph_vg2"
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:        }
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]:    ]
Nov 24 13:32:10 np0005533938 exciting_khorana[150522]: }
Nov 24 13:32:10 np0005533938 systemd[1]: libpod-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope: Deactivated successfully.
Nov 24 13:32:10 np0005533938 conmon[150522]: conmon e016556bf05c3f74b7a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope/container/memory.events
Nov 24 13:32:10 np0005533938 podman[150506]: 2025-11-24 18:32:10.02433164 +0000 UTC m=+0.845742983 container died e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:32:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-729504113eb1c468da14c3d70930e90fe48d1f274b842d0bd495a5c290bfef91-merged.mount: Deactivated successfully.
Nov 24 13:32:10 np0005533938 podman[150506]: 2025-11-24 18:32:10.126836678 +0000 UTC m=+0.948248031 container remove e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:32:10 np0005533938 systemd[1]: libpod-conmon-e016556bf05c3f74b7a9a18ff6a20080a3eb359be381a7607c3b57ef0467fdca.scope: Deactivated successfully.
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.754766455 +0000 UTC m=+0.046266421 container create 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 13:32:10 np0005533938 systemd[1]: Started libpod-conmon-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope.
Nov 24 13:32:10 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.732628036 +0000 UTC m=+0.024128002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.834587619 +0000 UTC m=+0.126087575 container init 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.84276814 +0000 UTC m=+0.134268096 container start 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.846718941 +0000 UTC m=+0.138218917 container attach 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:32:10 np0005533938 bold_hopper[150776]: 167 167
Nov 24 13:32:10 np0005533938 systemd[1]: libpod-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope: Deactivated successfully.
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.849215186 +0000 UTC m=+0.140715162 container died 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:32:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-43e474309d5feea4339de344308dc1df45bd266d4b874e3cd055203d364d3b22-merged.mount: Deactivated successfully.
Nov 24 13:32:10 np0005533938 podman[150759]: 2025-11-24 18:32:10.906099259 +0000 UTC m=+0.197599215 container remove 6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:32:10 np0005533938 systemd[1]: libpod-conmon-6a6924c446872f1b238103b5b44e260d8d913cc242a7b42fcdaac5c1db614fb9.scope: Deactivated successfully.
Nov 24 13:32:11 np0005533938 podman[150800]: 2025-11-24 18:32:11.06896739 +0000 UTC m=+0.042189086 container create 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 24 13:32:11 np0005533938 systemd[1]: Started libpod-conmon-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope.
Nov 24 13:32:11 np0005533938 podman[150800]: 2025-11-24 18:32:11.053229475 +0000 UTC m=+0.026451181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:32:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:32:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:32:11 np0005533938 podman[150800]: 2025-11-24 18:32:11.166643344 +0000 UTC m=+0.139865070 container init 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 13:32:11 np0005533938 podman[150800]: 2025-11-24 18:32:11.173342296 +0000 UTC m=+0.146563982 container start 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:32:11 np0005533938 podman[150800]: 2025-11-24 18:32:11.176309292 +0000 UTC m=+0.149530968 container attach 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:32:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:11 np0005533938 python3.9[150971]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:32:12 np0005533938 busy_borg[150821]: {
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_id": 0,
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "type": "bluestore"
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    },
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_id": 1,
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "type": "bluestore"
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    },
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_id": 2,
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:32:12 np0005533938 busy_borg[150821]:        "type": "bluestore"
Nov 24 13:32:12 np0005533938 busy_borg[150821]:    }
Nov 24 13:32:12 np0005533938 busy_borg[150821]: }
Nov 24 13:32:12 np0005533938 systemd[1]: libpod-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope: Deactivated successfully.
Nov 24 13:32:12 np0005533938 podman[151001]: 2025-11-24 18:32:12.14703536 +0000 UTC m=+0.020585410 container died 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:32:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1824eef510f4d3e822fb61fc9033349f9dc2e75e29c4fadcef238f10d19b6974-merged.mount: Deactivated successfully.
Nov 24 13:32:12 np0005533938 podman[151001]: 2025-11-24 18:32:12.19986618 +0000 UTC m=+0.073416240 container remove 61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:32:12 np0005533938 systemd[1]: libpod-conmon-61706b493dbc806153bfd46a44f6ed2d0b211ee5d1ee7e8f1e468127ee4adb23.scope: Deactivated successfully.
Nov 24 13:32:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:32:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:32:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:12 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 151f78d7-dfb8-461e-95aa-0968b06a48f7 does not exist
Nov 24 13:32:12 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f9841239-336b-49d5-9056-563aa15af362 does not exist
Nov 24 13:32:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:13 np0005533938 python3.9[151215]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:32:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:32:13 np0005533938 python3.9[151365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:32:14 np0005533938 python3.9[151515]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:32:14 np0005533938 systemd[1]: session-45.scope: Deactivated successfully.
Nov 24 13:32:14 np0005533938 systemd[1]: session-45.scope: Consumed 5.645s CPU time.
Nov 24 13:32:14 np0005533938 systemd-logind[822]: Session 45 logged out. Waiting for processes to exit.
Nov 24 13:32:14 np0005533938 systemd-logind[822]: Removed session 45.
Nov 24 13:32:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:32:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:20 np0005533938 systemd-logind[822]: New session 46 of user zuul.
Nov 24 13:32:20 np0005533938 systemd[1]: Started Session 46 of User zuul.
Nov 24 13:32:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:21 np0005533938 python3.9[151693]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:32:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:23 np0005533938 python3.9[151849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:23 np0005533938 python3.9[152001]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:24 np0005533938 python3.9[152153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:25 np0005533938 python3.9[152276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009144.1494124-65-121912473418158/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=04ab0229204e8e683e25d7b389e5447dda25fab6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:26 np0005533938 python3.9[152428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:26 np0005533938 python3.9[152551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009145.837043-65-177097578861882/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=79429362a394ef2683f794df52ffa3b38ef1c939 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:27 np0005533938 python3.9[152703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:28 np0005533938 python3.9[152826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009147.0485177-65-155868746217925/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7518fc18a6b36988d98be0ee7f2c8b7779ca174f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:28 np0005533938 python3.9[152978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:29 np0005533938 python3.9[153130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:30 np0005533938 python3.9[153282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:30 np0005533938 python3.9[153405]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009149.6092713-124-1261775699706/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b798c7d4884914f8199c0298f01b39ef12806173 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:31 np0005533938 python3.9[153557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:31 np0005533938 python3.9[153680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009150.8053813-124-191685444811499/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e002ea2e2d89648d7a0d696996ed799d0e5d34b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:32 np0005533938 python3.9[153832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:32 np0005533938 python3.9[153955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009151.982079-124-179531720915942/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f6526974e9bafe125505ea4c1e3ecfa5aecfb306 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:33 np0005533938 python3.9[154107]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:34 np0005533938 python3.9[154259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:32:34
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'default.rgw.log', '.mgr', 'vms']
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:32:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:32:34 np0005533938 python3.9[154411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:35 np0005533938 python3.9[154534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009154.5091846-183-171031919124707/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=705500fe9885935f2329f2ca970fd4743071d167 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:36 np0005533938 python3.9[154686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:36 np0005533938 python3.9[154809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009155.6244013-183-9032051815524/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=e002ea2e2d89648d7a0d696996ed799d0e5d34b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:37 np0005533938 python3.9[154961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:37 np0005533938 python3.9[155084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009156.7364738-183-115519125593309/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d63a741b9142b27415a97a0572bea2566e38144d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:39 np0005533938 python3.9[155236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:39 np0005533938 python3.9[155388]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:40 np0005533938 python3.9[155511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009159.2682621-251-213364431235913/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:41 np0005533938 python3.9[155663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:41 np0005533938 python3.9[155815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:42 np0005533938 python3.9[155938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009161.3265781-275-9729059007512/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:32:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:43 np0005533938 python3.9[156090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:44 np0005533938 python3.9[156242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:44 np0005533938 python3.9[156365]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009163.512652-299-49964354913980/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:45 np0005533938 python3.9[156517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:46 np0005533938 python3.9[156669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:46 np0005533938 python3.9[156792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009165.6151438-323-272466191618071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:47 np0005533938 python3.9[156944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:48 np0005533938 python3.9[157096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:48 np0005533938 python3.9[157219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009167.7751353-347-101961344962667/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:49 np0005533938 python3.9[157371]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:32:50 np0005533938 python3.9[157523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:50 np0005533938 python3.9[157646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009169.586473-371-113639924924673/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4453bc72f5dea8ea952ecd01786d1a0544923cc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:50 np0005533938 systemd[1]: session-46.scope: Deactivated successfully.
Nov 24 13:32:50 np0005533938 systemd[1]: session-46.scope: Consumed 22.298s CPU time.
Nov 24 13:32:50 np0005533938 systemd-logind[822]: Session 46 logged out. Waiting for processes to exit.
Nov 24 13:32:50 np0005533938 systemd-logind[822]: Removed session 46.
Nov 24 13:32:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:56 np0005533938 systemd-logind[822]: New session 47 of user zuul.
Nov 24 13:32:56 np0005533938 systemd[1]: Started Session 47 of User zuul.
Nov 24 13:32:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:32:57 np0005533938 python3.9[157827]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:32:58 np0005533938 python3.9[157979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:32:59 np0005533938 python3.9[158102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009177.697789-34-127937099787305/.source.conf _original_basename=ceph.conf follow=False checksum=e6376665f4d651a92ab919b303c349cf96ae8bd0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:32:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:00 np0005533938 python3.9[158254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:00 np0005533938 python3.9[158377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009179.7347126-34-70650416358500/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=da81228d7cc67f3a06b39ee156e276fa0a4ebf0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:01 np0005533938 systemd[1]: session-47.scope: Deactivated successfully.
Nov 24 13:33:01 np0005533938 systemd[1]: session-47.scope: Consumed 2.401s CPU time.
Nov 24 13:33:01 np0005533938 systemd-logind[822]: Session 47 logged out. Waiting for processes to exit.
Nov 24 13:33:01 np0005533938 systemd-logind[822]: Removed session 47.
Nov 24 13:33:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:06 np0005533938 systemd-logind[822]: New session 48 of user zuul.
Nov 24 13:33:06 np0005533938 systemd[1]: Started Session 48 of User zuul.
Nov 24 13:33:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:07 np0005533938 python3.9[158555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:33:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:08 np0005533938 python3.9[158711]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:09 np0005533938 python3.9[158863]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:10 np0005533938 python3.9[159013]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:33:11 np0005533938 python3.9[159165]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 13:33:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:12 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 24 13:33:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev c9729518-895f-4a87-acb8-4f0d81de12a8 does not exist
Nov 24 13:33:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 70078941-b5ca-4602-a0d5-ad3292efc7d6 does not exist
Nov 24 13:33:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev bf0e80ae-d9d1-4917-9520-4aa730585040 does not exist
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:33:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:13 np0005533938 python3.9[159452]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.616825919 +0000 UTC m=+0.034104447 container create 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:33:13 np0005533938 systemd[1]: Started libpod-conmon-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope.
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.601102605 +0000 UTC m=+0.018381133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.714690507 +0000 UTC m=+0.131969035 container init 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.7216011 +0000 UTC m=+0.138879628 container start 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.724835942 +0000 UTC m=+0.142114470 container attach 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:33:13 np0005533938 amazing_goldwasser[159617]: 167 167
Nov 24 13:33:13 np0005533938 systemd[1]: libpod-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope: Deactivated successfully.
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.727514759 +0000 UTC m=+0.144793287 container died 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 24 13:33:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-22cb7840db99737ea2ffa142dffa520d8759d60fe1c5d0d13267f0dbb93278f6-merged.mount: Deactivated successfully.
Nov 24 13:33:13 np0005533938 podman[159601]: 2025-11-24 18:33:13.775442762 +0000 UTC m=+0.192721310 container remove 00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:33:13 np0005533938 systemd[1]: libpod-conmon-00ff2a7ea718990ee0b838e133fb566a6abcab6d3eb40fafc691b9fbedbaaa57.scope: Deactivated successfully.
Nov 24 13:33:13 np0005533938 podman[159664]: 2025-11-24 18:33:13.925985732 +0000 UTC m=+0.036251201 container create 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:33:13 np0005533938 systemd[1]: Started libpod-conmon-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope.
Nov 24 13:33:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:14 np0005533938 podman[159664]: 2025-11-24 18:33:13.910390721 +0000 UTC m=+0.020656190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:14 np0005533938 podman[159664]: 2025-11-24 18:33:14.013805478 +0000 UTC m=+0.124070947 container init 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:33:14 np0005533938 podman[159664]: 2025-11-24 18:33:14.020623159 +0000 UTC m=+0.130888608 container start 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:33:14 np0005533938 podman[159664]: 2025-11-24 18:33:14.024194338 +0000 UTC m=+0.134459867 container attach 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:33:14 np0005533938 python3.9[159738]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:33:15 np0005533938 upbeat_villani[159705]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:33:15 np0005533938 upbeat_villani[159705]: --> relative data size: 1.0
Nov 24 13:33:15 np0005533938 upbeat_villani[159705]: --> All data devices are unavailable
Nov 24 13:33:15 np0005533938 systemd[1]: libpod-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Deactivated successfully.
Nov 24 13:33:15 np0005533938 podman[159664]: 2025-11-24 18:33:15.06925197 +0000 UTC m=+1.179517419 container died 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:33:15 np0005533938 systemd[1]: libpod-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Consumed 1.003s CPU time.
Nov 24 13:33:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8b982657825c3c6b6494c9896a8448cb8af5db784e2e7b664f9c15d4e196392f-merged.mount: Deactivated successfully.
Nov 24 13:33:15 np0005533938 podman[159664]: 2025-11-24 18:33:15.123085391 +0000 UTC m=+1.233350840 container remove 5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:33:15 np0005533938 systemd[1]: libpod-conmon-5a7bbce6c6340dba2df1fd221a24cbee0bd4b99f05de42c394ad0ec910324319.scope: Deactivated successfully.
Nov 24 13:33:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.72796623 +0000 UTC m=+0.058199503 container create b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:33:15 np0005533938 systemd[1]: Started libpod-conmon-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope.
Nov 24 13:33:15 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.706043979 +0000 UTC m=+0.036277332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.811273872 +0000 UTC m=+0.141507155 container init b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.816532484 +0000 UTC m=+0.146765747 container start b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.819290623 +0000 UTC m=+0.149523926 container attach b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:33:15 np0005533938 angry_rubin[160010]: 167 167
Nov 24 13:33:15 np0005533938 systemd[1]: libpod-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope: Deactivated successfully.
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.822511234 +0000 UTC m=+0.152744497 container died b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:33:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f1952e9e19e9069218ee51eb33abc0a3c68fd9a2faa7eb25d7f06e2a72bfe9e0-merged.mount: Deactivated successfully.
Nov 24 13:33:15 np0005533938 podman[159943]: 2025-11-24 18:33:15.862105978 +0000 UTC m=+0.192339251 container remove b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:33:15 np0005533938 systemd[1]: libpod-conmon-b6bb91aa4e5296d652eb9c37d2b6b8bfd83b20d69c7e19fe8125b1f48ed639c2.scope: Deactivated successfully.
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.020195488 +0000 UTC m=+0.042725704 container create 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:33:16 np0005533938 systemd[1]: Started libpod-conmon-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope.
Nov 24 13:33:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.00197848 +0000 UTC m=+0.024508746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.109610093 +0000 UTC m=+0.132140349 container init 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.121169633 +0000 UTC m=+0.143699879 container start 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.12463968 +0000 UTC m=+0.147169916 container attach 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:33:16 np0005533938 python3.9[160132]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:33:16 np0005533938 objective_spence[160052]: {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    "0": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "devices": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "/dev/loop3"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            ],
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_name": "ceph_lv0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_size": "21470642176",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "name": "ceph_lv0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "tags": {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_name": "ceph",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.crush_device_class": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.encrypted": "0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_id": "0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.vdo": "0"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            },
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "vg_name": "ceph_vg0"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        }
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    ],
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    "1": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "devices": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "/dev/loop4"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            ],
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_name": "ceph_lv1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_size": "21470642176",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "name": "ceph_lv1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "tags": {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_name": "ceph",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.crush_device_class": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.encrypted": "0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_id": "1",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.vdo": "0"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            },
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "vg_name": "ceph_vg1"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        }
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    ],
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    "2": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "devices": [
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "/dev/loop5"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            ],
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_name": "ceph_lv2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_size": "21470642176",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "name": "ceph_lv2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "tags": {
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.cluster_name": "ceph",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.crush_device_class": "",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.encrypted": "0",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osd_id": "2",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:                "ceph.vdo": "0"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            },
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "type": "block",
Nov 24 13:33:16 np0005533938 objective_spence[160052]:            "vg_name": "ceph_vg2"
Nov 24 13:33:16 np0005533938 objective_spence[160052]:        }
Nov 24 13:33:16 np0005533938 objective_spence[160052]:    ]
Nov 24 13:33:16 np0005533938 objective_spence[160052]: }
Nov 24 13:33:16 np0005533938 systemd[1]: libpod-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope: Deactivated successfully.
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.880765275 +0000 UTC m=+0.903295491 container died 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:33:16 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3848413bf1ba806ff64ad79c8351e588a3fbbb2c6c181df923039169970b08d8-merged.mount: Deactivated successfully.
Nov 24 13:33:16 np0005533938 podman[160035]: 2025-11-24 18:33:16.937547961 +0000 UTC m=+0.960078167 container remove 131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:33:16 np0005533938 systemd[1]: libpod-conmon-131ecdd2bde7ba6ab243c566cb7d0312d2c766029fd9c56d2e0939e69dcf9806.scope: Deactivated successfully.
Nov 24 13:33:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.294878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197294936, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3487867, "memory_usage": 3549632, "flush_reason": "Manual Compaction"}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197312121, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3412733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9723, "largest_seqno": 11762, "table_properties": {"data_size": 3403444, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17818, "raw_average_key_size": 19, "raw_value_size": 3385063, "raw_average_value_size": 3695, "num_data_blocks": 268, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008966, "oldest_key_time": 1764008966, "file_creation_time": 1764009197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17285 microseconds, and 8081 cpu microseconds.
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.312163) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3412733 bytes OK
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.312181) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313455) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313468) EVENT_LOG_v1 {"time_micros": 1764009197313464, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.313486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3479360, prev total WAL file size 3479360, number of live WAL files 2.
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.314665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3332KB)], [26(5999KB)]
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197314693, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9555960, "oldest_snapshot_seqno": -1}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3714 keys, 7820080 bytes, temperature: kUnknown
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197356997, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7820080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7791670, "index_size": 17996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89270, "raw_average_key_size": 24, "raw_value_size": 7721053, "raw_average_value_size": 2078, "num_data_blocks": 779, "num_entries": 3714, "num_filter_entries": 3714, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009197, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.357249) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7820080 bytes
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.358580) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.5 rd, 184.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4228, records dropped: 514 output_compression: NoCompression
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.358601) EVENT_LOG_v1 {"time_micros": 1764009197358591, "job": 10, "event": "compaction_finished", "compaction_time_micros": 42379, "compaction_time_cpu_micros": 17150, "output_level": 6, "num_output_files": 1, "total_output_size": 7820080, "num_input_records": 4228, "num_output_records": 3714, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197359412, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009197360806, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.314608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.360870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:33:17.361260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:33:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.598369014 +0000 UTC m=+0.046716314 container create d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:33:17 np0005533938 systemd[1]: Started libpod-conmon-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope.
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.572432893 +0000 UTC m=+0.020780253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.683108422 +0000 UTC m=+0.131455682 container init d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.694246842 +0000 UTC m=+0.142594112 container start d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.697535724 +0000 UTC m=+0.145882994 container attach d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:33:17 np0005533938 zealous_haibt[160304]: 167 167
Nov 24 13:33:17 np0005533938 systemd[1]: libpod-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope: Deactivated successfully.
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.700444777 +0000 UTC m=+0.148792097 container died d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:33:17 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6353e69641e3e22060e3494312397b96b0064453e408941b77fd5bf106136d07-merged.mount: Deactivated successfully.
Nov 24 13:33:17 np0005533938 podman[160288]: 2025-11-24 18:33:17.740173325 +0000 UTC m=+0.188520605 container remove d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:33:17 np0005533938 systemd[1]: libpod-conmon-d5da9735b4016ecfac5ff076c9351cd63c44e2b178b24336bcf8de7c49857437.scope: Deactivated successfully.
Nov 24 13:33:17 np0005533938 podman[160329]: 2025-11-24 18:33:17.891652899 +0000 UTC m=+0.050121760 container create 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:33:17 np0005533938 systemd[1]: Started libpod-conmon-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope.
Nov 24 13:33:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:33:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:33:17 np0005533938 podman[160329]: 2025-11-24 18:33:17.873075492 +0000 UTC m=+0.031544383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:33:17 np0005533938 podman[160329]: 2025-11-24 18:33:17.968150019 +0000 UTC m=+0.126618890 container init 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:33:17 np0005533938 podman[160329]: 2025-11-24 18:33:17.978213642 +0000 UTC m=+0.136682513 container start 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:33:17 np0005533938 podman[160329]: 2025-11-24 18:33:17.980947411 +0000 UTC m=+0.139416282 container attach 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:33:18 np0005533938 python3[160501]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 24 13:33:18 np0005533938 priceless_chatterjee[160369]: {
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_id": 0,
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "type": "bluestore"
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    },
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_id": 1,
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "type": "bluestore"
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    },
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_id": 2,
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:        "type": "bluestore"
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]:    }
Nov 24 13:33:19 np0005533938 priceless_chatterjee[160369]: }
Nov 24 13:33:19 np0005533938 systemd[1]: libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Deactivated successfully.
Nov 24 13:33:19 np0005533938 systemd[1]: libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Consumed 1.046s CPU time.
Nov 24 13:33:19 np0005533938 conmon[160369]: conmon 679499359f3ac5dbed96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope/container/memory.events
Nov 24 13:33:19 np0005533938 podman[160329]: 2025-11-24 18:33:19.022161415 +0000 UTC m=+1.180630286 container died 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:33:19 np0005533938 systemd[1]: var-lib-containers-storage-overlay-af9c72e2e7a7ff53222e45a2eb1d1d3f35ad4465ef617b4d3606984079a7601d-merged.mount: Deactivated successfully.
Nov 24 13:33:19 np0005533938 podman[160329]: 2025-11-24 18:33:19.072701674 +0000 UTC m=+1.231170525 container remove 679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:33:19 np0005533938 systemd[1]: libpod-conmon-679499359f3ac5dbed96ff64eac0675a1c20e6e4ba5bde45451656a918b523e4.scope: Deactivated successfully.
Nov 24 13:33:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:33:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:33:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6d07fda0-cbdf-42da-8a6f-170c452d6dfe does not exist
Nov 24 13:33:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3bab2517-246b-4fbf-b210-69d8b45c0fbf does not exist
Nov 24 13:33:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:19 np0005533938 python3.9[160743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:33:20 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:33:20 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:33:20 np0005533938 python3.9[160896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:20 np0005533938 python3.9[160974]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:21 np0005533938 python3.9[161126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:21 np0005533938 python3.9[161204]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4_u6k5rv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:22 np0005533938 python3.9[161356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:22 np0005533938 python3.9[161434]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:23 np0005533938 python3.9[161586]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:24 np0005533938 python3[161739]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 13:33:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:25 np0005533938 python3.9[161891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:26 np0005533938 python3.9[162016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009204.832712-157-187531896417261/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:26 np0005533938 python3.9[162168]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:27 np0005533938 python3.9[162293]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009206.3507297-172-223217342086402/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:28 np0005533938 python3.9[162445]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:28 np0005533938 python3.9[162570]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009207.5716815-187-69513261474954/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:29 np0005533938 python3.9[162722]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:29 np0005533938 python3.9[162847]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009208.7609055-202-119390797509032/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:30 np0005533938 python3.9[162999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:31 np0005533938 python3.9[163124]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009209.99195-217-72918704209764/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:31 np0005533938 python3.9[163276]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:32 np0005533938 python3.9[163428]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:33 np0005533938 python3.9[163583]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:34 np0005533938 python3.9[163735]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:33:34
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root']
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:33:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:33:34 np0005533938 python3.9[163888]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:33:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:35 np0005533938 python3.9[164042]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:36 np0005533938 python3.9[164197]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:37 np0005533938 python3.9[164347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:33:38 np0005533938 python3.9[164500]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:38 np0005533938 ovs-vsctl[164501]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 24 13:33:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:39 np0005533938 python3.9[164653]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:40 np0005533938 python3.9[164808]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:33:40 np0005533938 ovs-vsctl[164809]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 24 13:33:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:41 np0005533938 python3.9[164959]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:33:42 np0005533938 python3.9[165113]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:42 np0005533938 python3.9[165265]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:33:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:43 np0005533938 python3.9[165343]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:44 np0005533938 python3.9[165495]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:44 np0005533938 python3.9[165573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:45 np0005533938 python3.9[165725]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:46 np0005533938 python3.9[165877]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:46 np0005533938 python3.9[165955]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:47 np0005533938 python3.9[166107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:47 np0005533938 python3.9[166185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:48 np0005533938 python3.9[166337]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:33:48 np0005533938 systemd[1]: Reloading.
Nov 24 13:33:48 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:33:48 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:33:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:49 np0005533938 python3.9[166526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:50 np0005533938 python3.9[166604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:50 np0005533938 python3.9[166756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:51 np0005533938 python3.9[166834]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:52 np0005533938 python3.9[166986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:33:52 np0005533938 systemd[1]: Reloading.
Nov 24 13:33:52 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:33:52 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:33:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:52 np0005533938 systemd[1]: Starting Create netns directory...
Nov 24 13:33:52 np0005533938 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 13:33:52 np0005533938 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 13:33:52 np0005533938 systemd[1]: Finished Create netns directory.
Nov 24 13:33:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:53 np0005533938 python3.9[167179]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:54 np0005533938 python3.9[167331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:55 np0005533938 python3.9[167454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009233.8872583-468-106279206090496/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:56 np0005533938 python3.9[167606]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:33:56 np0005533938 python3.9[167758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:33:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:33:57 np0005533938 python3.9[167881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009236.2765903-493-110960222667399/.source.json _original_basename=.i2skxlep follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:33:58 np0005533938 python3.9[168033]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:33:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:00 np0005533938 python3.9[168461]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 24 13:34:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:01 np0005533938 python3.9[168613]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 13:34:02 np0005533938 python3.9[168765]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 13:34:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:04 np0005533938 python3[168944]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:08 np0005533938 podman[168957]: 2025-11-24 18:34:08.900029894 +0000 UTC m=+4.641267200 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 13:34:09 np0005533938 podman[169077]: 2025-11-24 18:34:09.046773563 +0000 UTC m=+0.055062422 container create 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:09 np0005533938 podman[169077]: 2025-11-24 18:34:09.01078114 +0000 UTC m=+0.019069979 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 13:34:09 np0005533938 python3[168944]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 24 13:34:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:09 np0005533938 python3.9[169267]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:34:10 np0005533938 python3.9[169421]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:10 np0005533938 python3.9[169497]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:34:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:11 np0005533938 python3.9[169648]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009251.007633-581-209477034811657/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:12 np0005533938 python3.9[169724]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:34:12 np0005533938 systemd[1]: Reloading.
Nov 24 13:34:12 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:34:12 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:34:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:13 np0005533938 python3.9[169835]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:34:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:14 np0005533938 systemd[1]: Reloading.
Nov 24 13:34:14 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:34:14 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:34:14 np0005533938 systemd[1]: Starting ovn_controller container...
Nov 24 13:34:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d7ba1ba1118a7fab99be6adfd7018106b5ccb6e47b758d2c8e6c85a7bb6839d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:14 np0005533938 systemd[1]: Started /usr/bin/podman healthcheck run 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d.
Nov 24 13:34:14 np0005533938 podman[169876]: 2025-11-24 18:34:14.580288249 +0000 UTC m=+0.094057469 container init 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:14 np0005533938 ovn_controller[169892]: + sudo -E kolla_set_configs
Nov 24 13:34:14 np0005533938 podman[169876]: 2025-11-24 18:34:14.600326131 +0000 UTC m=+0.114095351 container start 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 24 13:34:14 np0005533938 edpm-start-podman-container[169876]: ovn_controller
Nov 24 13:34:14 np0005533938 systemd[1]: Created slice User Slice of UID 0.
Nov 24 13:34:14 np0005533938 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 13:34:14 np0005533938 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 13:34:14 np0005533938 systemd[1]: Starting User Manager for UID 0...
Nov 24 13:34:14 np0005533938 edpm-start-podman-container[169875]: Creating additional drop-in dependency for "ovn_controller" (258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d)
Nov 24 13:34:14 np0005533938 podman[169899]: 2025-11-24 18:34:14.687570489 +0000 UTC m=+0.077976446 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:34:14 np0005533938 systemd[1]: 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d-73b1918b52bf1f65.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 13:34:14 np0005533938 systemd[1]: 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d-73b1918b52bf1f65.service: Failed with result 'exit-code'.
Nov 24 13:34:14 np0005533938 systemd[1]: Reloading.
Nov 24 13:34:14 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:34:14 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:34:14 np0005533938 systemd[169931]: Queued start job for default target Main User Target.
Nov 24 13:34:14 np0005533938 systemd[169931]: Created slice User Application Slice.
Nov 24 13:34:14 np0005533938 systemd[169931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 24 13:34:14 np0005533938 systemd[169931]: Started Daily Cleanup of User's Temporary Directories.
Nov 24 13:34:14 np0005533938 systemd[169931]: Reached target Paths.
Nov 24 13:34:14 np0005533938 systemd[169931]: Reached target Timers.
Nov 24 13:34:14 np0005533938 systemd[169931]: Starting D-Bus User Message Bus Socket...
Nov 24 13:34:14 np0005533938 systemd[169931]: Starting Create User's Volatile Files and Directories...
Nov 24 13:34:14 np0005533938 systemd[169931]: Listening on D-Bus User Message Bus Socket.
Nov 24 13:34:14 np0005533938 systemd[169931]: Finished Create User's Volatile Files and Directories.
Nov 24 13:34:14 np0005533938 systemd[169931]: Reached target Sockets.
Nov 24 13:34:14 np0005533938 systemd[169931]: Reached target Basic System.
Nov 24 13:34:14 np0005533938 systemd[169931]: Reached target Main User Target.
Nov 24 13:34:14 np0005533938 systemd[169931]: Startup finished in 153ms.
Nov 24 13:34:14 np0005533938 systemd[1]: Started User Manager for UID 0.
Nov 24 13:34:14 np0005533938 systemd[1]: Started ovn_controller container.
Nov 24 13:34:14 np0005533938 systemd[1]: Started Session c1 of User root.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: INFO:__main__:Validating config file
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: INFO:__main__:Writing out command to execute
Nov 24 13:34:15 np0005533938 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: ++ cat /run_command
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + ARGS=
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + sudo kolla_copy_cacerts
Nov 24 13:34:15 np0005533938 systemd[1]: Started Session c2 of User root.
Nov 24 13:34:15 np0005533938 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + [[ ! -n '' ]]
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + . kolla_extend_start
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + umask 0022
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.1549] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.1556] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.1567] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.1572] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.1576] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 24 13:34:15 np0005533938 kernel: br-int: entered promiscuous mode
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 13:34:15 np0005533938 systemd-udevd[170024]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:34:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 13:34:15 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.2960] manager: (ovn-931e5e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 24 13:34:15 np0005533938 kernel: genev_sys_6081: entered promiscuous mode
Nov 24 13:34:15 np0005533938 systemd-udevd[170026]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.3233] device (genev_sys_6081): carrier: link connected
Nov 24 13:34:15 np0005533938 NetworkManager[48851]: <info>  [1764009255.3236] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 24 13:34:15 np0005533938 python3.9[170156]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:34:15 np0005533938 ovs-vsctl[170157]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 24 13:34:16 np0005533938 python3.9[170310]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:34:16 np0005533938 ovs-vsctl[170312]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 24 13:34:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:17 np0005533938 python3.9[170465]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:34:17 np0005533938 ovs-vsctl[170466]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 24 13:34:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:17 np0005533938 systemd[1]: session-48.scope: Deactivated successfully.
Nov 24 13:34:17 np0005533938 systemd[1]: session-48.scope: Consumed 56.448s CPU time.
Nov 24 13:34:17 np0005533938 systemd-logind[822]: Session 48 logged out. Waiting for processes to exit.
Nov 24 13:34:17 np0005533938 systemd-logind[822]: Removed session 48.
Nov 24 13:34:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 09394413-e15c-4775-8e6b-b00d248be74d does not exist
Nov 24 13:34:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f045bbd7-1f35-41f8-a079-87b6455dcac1 does not exist
Nov 24 13:34:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev a69aa5c2-74d6-4547-bf67-32847d327369 does not exist
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:34:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:34:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:34:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.548314522 +0000 UTC m=+0.070353064 container create 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.498332139 +0000 UTC m=+0.020370701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:20 np0005533938 systemd[1]: Started libpod-conmon-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope.
Nov 24 13:34:20 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.756625815 +0000 UTC m=+0.278664387 container init 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.767666262 +0000 UTC m=+0.289704804 container start 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.77081049 +0000 UTC m=+0.292849042 container attach 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:34:20 np0005533938 vigorous_einstein[170776]: 167 167
Nov 24 13:34:20 np0005533938 systemd[1]: libpod-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope: Deactivated successfully.
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.773780755 +0000 UTC m=+0.295819337 container died 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:34:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7ed873de0b677265f2ef3754e8cfb95ae3d8b1ac49fb43584d6a71fe74407b4e-merged.mount: Deactivated successfully.
Nov 24 13:34:20 np0005533938 podman[170762]: 2025-11-24 18:34:20.817279555 +0000 UTC m=+0.339318097 container remove 5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:34:20 np0005533938 systemd[1]: libpod-conmon-5289f896b0bcfd5d291c0e4eab84e7e0f1b1aed42d5fc060ce80ba6e6adf0203.scope: Deactivated successfully.
Nov 24 13:34:20 np0005533938 podman[170802]: 2025-11-24 18:34:20.989315238 +0000 UTC m=+0.043100681 container create 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:34:21 np0005533938 systemd[1]: Started libpod-conmon-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope.
Nov 24 13:34:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:21 np0005533938 podman[170802]: 2025-11-24 18:34:21.061300563 +0000 UTC m=+0.115085996 container init 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:21 np0005533938 podman[170802]: 2025-11-24 18:34:20.972122117 +0000 UTC m=+0.025907540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:21 np0005533938 podman[170802]: 2025-11-24 18:34:21.069917219 +0000 UTC m=+0.123702622 container start 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:34:21 np0005533938 podman[170802]: 2025-11-24 18:34:21.073197791 +0000 UTC m=+0.126983214 container attach 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 24 13:34:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:22 np0005533938 festive_kilby[170818]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:34:22 np0005533938 festive_kilby[170818]: --> relative data size: 1.0
Nov 24 13:34:22 np0005533938 festive_kilby[170818]: --> All data devices are unavailable
Nov 24 13:34:22 np0005533938 systemd[1]: libpod-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Deactivated successfully.
Nov 24 13:34:22 np0005533938 systemd[1]: libpod-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Consumed 1.027s CPU time.
Nov 24 13:34:22 np0005533938 podman[170802]: 2025-11-24 18:34:22.142540559 +0000 UTC m=+1.196325972 container died 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:34:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-9cfb1a84c9f79d4189214b3f31c81f4ed4a1b349db3b706daaf941efe82c61a8-merged.mount: Deactivated successfully.
Nov 24 13:34:22 np0005533938 podman[170802]: 2025-11-24 18:34:22.206177385 +0000 UTC m=+1.259962778 container remove 10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:34:22 np0005533938 systemd[1]: libpod-conmon-10e217e099d59753855e0391c5f605a2c5b7806fa48a1daf963c964bce9b2f5b.scope: Deactivated successfully.
Nov 24 13:34:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.775763985 +0000 UTC m=+0.036200049 container create 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:22 np0005533938 systemd[1]: Started libpod-conmon-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope.
Nov 24 13:34:22 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.84897811 +0000 UTC m=+0.109414184 container init 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.853821712 +0000 UTC m=+0.114257766 container start 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.759958269 +0000 UTC m=+0.020394353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.857096604 +0000 UTC m=+0.117532658 container attach 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:34:22 np0005533938 reverent_herschel[171015]: 167 167
Nov 24 13:34:22 np0005533938 systemd[1]: libpod-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope: Deactivated successfully.
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.859408442 +0000 UTC m=+0.119844486 container died 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:34:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0d590cc5543b40d4b26fab2c33c48fe79aab9cd2534c3bbc633c73aaf16ff5b4-merged.mount: Deactivated successfully.
Nov 24 13:34:22 np0005533938 podman[170999]: 2025-11-24 18:34:22.892478911 +0000 UTC m=+0.152915005 container remove 8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:34:22 np0005533938 systemd[1]: libpod-conmon-8330551fd65cdf9717daa997fd4259c1bd4a174f47b8e1a1e5c9b34194c589d9.scope: Deactivated successfully.
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.064176306 +0000 UTC m=+0.043181834 container create ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:34:23 np0005533938 systemd[1]: Started libpod-conmon-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope.
Nov 24 13:34:23 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:23 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.043143728 +0000 UTC m=+0.022149296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.138663073 +0000 UTC m=+0.117668611 container init ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.144189491 +0000 UTC m=+0.123194999 container start ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.146735575 +0000 UTC m=+0.125741093 container attach ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:23 np0005533938 systemd-logind[822]: New session 50 of user zuul.
Nov 24 13:34:23 np0005533938 systemd[1]: Started Session 50 of User zuul.
Nov 24 13:34:23 np0005533938 interesting_curie[171054]: {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    "0": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "devices": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "/dev/loop3"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            ],
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_name": "ceph_lv0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_size": "21470642176",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "name": "ceph_lv0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "tags": {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_name": "ceph",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.crush_device_class": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.encrypted": "0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_id": "0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.vdo": "0"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            },
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "vg_name": "ceph_vg0"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        }
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    ],
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    "1": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "devices": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "/dev/loop4"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            ],
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_name": "ceph_lv1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_size": "21470642176",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "name": "ceph_lv1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "tags": {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_name": "ceph",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.crush_device_class": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.encrypted": "0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_id": "1",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.vdo": "0"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            },
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "vg_name": "ceph_vg1"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        }
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    ],
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    "2": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "devices": [
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "/dev/loop5"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            ],
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_name": "ceph_lv2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_size": "21470642176",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "name": "ceph_lv2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "tags": {
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.cluster_name": "ceph",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.crush_device_class": "",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.encrypted": "0",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osd_id": "2",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:                "ceph.vdo": "0"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            },
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "type": "block",
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:            "vg_name": "ceph_vg2"
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:        }
Nov 24 13:34:23 np0005533938 interesting_curie[171054]:    ]
Nov 24 13:34:23 np0005533938 interesting_curie[171054]: }
Nov 24 13:34:23 np0005533938 systemd[1]: libpod-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope: Deactivated successfully.
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.919359126 +0000 UTC m=+0.898364644 container died ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:23 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4fc08636e7a676dc42ac5fc7dfcef6bdcba4ec8149da59bb5213cae54e231929-merged.mount: Deactivated successfully.
Nov 24 13:34:23 np0005533938 podman[171038]: 2025-11-24 18:34:23.975631706 +0000 UTC m=+0.954637224 container remove ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:34:23 np0005533938 systemd[1]: libpod-conmon-ac06c040cb15be6c0c9b0944a2429b2e3eba7e0e45c6b6ad65b0cc4b04fd2df2.scope: Deactivated successfully.
Nov 24 13:34:24 np0005533938 python3.9[171255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.495633953 +0000 UTC m=+0.035007419 container create 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:34:24 np0005533938 systemd[1]: Started libpod-conmon-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope.
Nov 24 13:34:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.57168427 +0000 UTC m=+0.111057736 container init 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.479656182 +0000 UTC m=+0.019029668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.578522171 +0000 UTC m=+0.117895637 container start 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.581339722 +0000 UTC m=+0.120713208 container attach 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:34:24 np0005533938 happy_lumiere[171389]: 167 167
Nov 24 13:34:24 np0005533938 systemd[1]: libpod-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope: Deactivated successfully.
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.583602799 +0000 UTC m=+0.122976265 container died 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:34:24 np0005533938 systemd[1]: var-lib-containers-storage-overlay-794876e19050cc03d667631f1afc227f2ae79d25977a8677fe9cec7a71de7e43-merged.mount: Deactivated successfully.
Nov 24 13:34:24 np0005533938 podman[171373]: 2025-11-24 18:34:24.614713779 +0000 UTC m=+0.154087235 container remove 7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:34:24 np0005533938 systemd[1]: libpod-conmon-7965a8de23f34c4987341404fad7219283ccf7bd083937e14488fe41a634b444.scope: Deactivated successfully.
Nov 24 13:34:24 np0005533938 podman[171436]: 2025-11-24 18:34:24.760886513 +0000 UTC m=+0.039303866 container create e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:34:24 np0005533938 systemd[1]: Started libpod-conmon-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope.
Nov 24 13:34:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:34:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:24 np0005533938 podman[171436]: 2025-11-24 18:34:24.743944908 +0000 UTC m=+0.022362281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:34:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:34:24 np0005533938 podman[171436]: 2025-11-24 18:34:24.847263848 +0000 UTC m=+0.125681261 container init e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:34:24 np0005533938 podman[171436]: 2025-11-24 18:34:24.854506259 +0000 UTC m=+0.132923612 container start e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:34:24 np0005533938 podman[171436]: 2025-11-24 18:34:24.857576106 +0000 UTC m=+0.135993459 container attach e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:34:25 np0005533938 systemd[1]: Stopping User Manager for UID 0...
Nov 24 13:34:25 np0005533938 systemd[169931]: Activating special unit Exit the Session...
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped target Main User Target.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped target Basic System.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped target Paths.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped target Sockets.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped target Timers.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 24 13:34:25 np0005533938 systemd[169931]: Closed D-Bus User Message Bus Socket.
Nov 24 13:34:25 np0005533938 systemd[169931]: Stopped Create User's Volatile Files and Directories.
Nov 24 13:34:25 np0005533938 systemd[169931]: Removed slice User Application Slice.
Nov 24 13:34:25 np0005533938 systemd[169931]: Reached target Shutdown.
Nov 24 13:34:25 np0005533938 systemd[169931]: Finished Exit the Session.
Nov 24 13:34:25 np0005533938 systemd[169931]: Reached target Exit the Session.
Nov 24 13:34:25 np0005533938 systemd[1]: user@0.service: Deactivated successfully.
Nov 24 13:34:25 np0005533938 systemd[1]: Stopped User Manager for UID 0.
Nov 24 13:34:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:25 np0005533938 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 24 13:34:25 np0005533938 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 24 13:34:25 np0005533938 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 24 13:34:25 np0005533938 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 24 13:34:25 np0005533938 systemd[1]: Removed slice User Slice of UID 0.
Nov 24 13:34:25 np0005533938 python3.9[171588]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]: {
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_id": 0,
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "type": "bluestore"
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    },
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_id": 1,
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "type": "bluestore"
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    },
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_id": 2,
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:        "type": "bluestore"
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]:    }
Nov 24 13:34:25 np0005533938 admiring_feynman[171460]: }
Nov 24 13:34:25 np0005533938 systemd[1]: libpod-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Deactivated successfully.
Nov 24 13:34:25 np0005533938 systemd[1]: libpod-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Consumed 1.033s CPU time.
Nov 24 13:34:25 np0005533938 podman[171436]: 2025-11-24 18:34:25.901430296 +0000 UTC m=+1.179847679 container died e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:34:26 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2da7d3f628998e22320d7492a6c16ce37ed113f9be128c7b64929dd5a6aff5f4-merged.mount: Deactivated successfully.
Nov 24 13:34:26 np0005533938 podman[171436]: 2025-11-24 18:34:26.849175136 +0000 UTC m=+2.127592489 container remove e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:34:26 np0005533938 systemd[1]: libpod-conmon-e4d3819c1d3abf03908cec919b83824b54ce77b305b6b195860862fcf2a7682c.scope: Deactivated successfully.
Nov 24 13:34:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:34:26 np0005533938 python3.9[171797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:34:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:26 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev b69304bb-166b-4453-9384-8c965b360dcb does not exist
Nov 24 13:34:26 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6d70bf40-0e5c-429f-811e-7faaecd6b75b does not exist
Nov 24 13:34:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:27 np0005533938 python3.9[172000]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:27 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:27 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:34:28 np0005533938 python3.9[172152]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:28 np0005533938 python3.9[172304]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:29 np0005533938 python3.9[172454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:34:30 np0005533938 python3.9[172606]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 24 13:34:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:31 np0005533938 python3.9[172756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:32 np0005533938 python3.9[172877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009271.267652-86-268222752231914/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:33 np0005533938 python3.9[173027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:34 np0005533938 python3.9[173149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009272.8752599-101-243795067466894/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:34:34
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:34:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:34:34 np0005533938 python3.9[173301]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:34:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:35 np0005533938 ceph-mgr[75218]: client.0 ms_handle_reset on v2:192.168.122.100:6800/536471675
Nov 24 13:34:35 np0005533938 python3.9[173385]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:34:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:38 np0005533938 python3.9[173538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:34:39 np0005533938 python3.9[173691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:39 np0005533938 python3.9[173812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009278.6302657-138-211746784057465/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:40 np0005533938 python3.9[173962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:41 np0005533938 python3.9[174083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009279.9707673-138-92510014303264/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:42 np0005533938 python3.9[174233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:43 np0005533938 python3.9[174354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009282.0292325-182-96257055751031/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:34:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:43 np0005533938 python3.9[174504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:44 np0005533938 python3.9[174625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009283.1474743-182-227912343828892/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:44 np0005533938 python3.9[174775]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:34:44 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:44Z|00025|memory|INFO|16256 kB peak resident set size after 29.8 seconds
Nov 24 13:34:44 np0005533938 ovn_controller[169892]: 2025-11-24T18:34:44Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 24 13:34:44 np0005533938 podman[174776]: 2025-11-24 18:34:44.993821472 +0000 UTC m=+0.088347661 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 13:34:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:45 np0005533938 python3.9[174953]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:46 np0005533938 python3.9[175105]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:46 np0005533938 python3.9[175183]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:47 np0005533938 python3.9[175335]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:47 np0005533938 python3.9[175413]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:48 np0005533938 python3.9[175565]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:49 np0005533938 python3.9[175717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:49 np0005533938 python3.9[175795]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:50 np0005533938 python3.9[175947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:50 np0005533938 python3.9[176025]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:51 np0005533938 python3.9[176177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:34:51 np0005533938 systemd[1]: Reloading.
Nov 24 13:34:51 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:34:51 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:34:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:52 np0005533938 python3.9[176367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:53 np0005533938 python3.9[176445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:54 np0005533938 python3.9[176597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:54 np0005533938 python3.9[176675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:34:54 np0005533938 auditd[701]: Audit daemon rotating log files
Nov 24 13:34:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:55 np0005533938 python3.9[176827]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:34:55 np0005533938 systemd[1]: Reloading.
Nov 24 13:34:55 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:34:55 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:34:55 np0005533938 systemd[1]: Starting Create netns directory...
Nov 24 13:34:55 np0005533938 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 13:34:55 np0005533938 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 13:34:55 np0005533938 systemd[1]: Finished Create netns directory.
Nov 24 13:34:56 np0005533938 python3.9[177021]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:57 np0005533938 python3.9[177174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:34:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:34:58 np0005533938 python3.9[177298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009296.8196585-333-179039905916061/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:59 np0005533938 python3.9[177450]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:34:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:34:59 np0005533938 python3.9[177602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:35:00 np0005533938 python3.9[177725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009299.334048-358-155557701270394/.source.json _original_basename=.dz3adk1y follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:01 np0005533938 python3.9[177877]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:03 np0005533938 python3.9[178304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 24 13:35:04 np0005533938 python3.9[178456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:05 np0005533938 python3.9[178608]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 13:35:07 np0005533938 python3[178786]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 13:35:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:14 np0005533938 podman[178798]: 2025-11-24 18:35:14.855966456 +0000 UTC m=+7.745127479 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 13:35:15 np0005533938 podman[178918]: 2025-11-24 18:35:15.06893954 +0000 UTC m=+0.059304074 container create 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 24 13:35:15 np0005533938 podman[178918]: 2025-11-24 18:35:15.037057144 +0000 UTC m=+0.027421728 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 13:35:15 np0005533938 python3[178786]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 24 13:35:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:15 np0005533938 podman[179080]: 2025-11-24 18:35:15.956714629 +0000 UTC m=+0.172288463 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:35:16 np0005533938 python3.9[179121]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:35:16 np0005533938 python3.9[179289]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:17 np0005533938 python3.9[179365]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:35:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:18 np0005533938 python3.9[179516]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009317.4370267-446-3678531132004/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:18 np0005533938 python3.9[179592]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:35:18 np0005533938 systemd[1]: Reloading.
Nov 24 13:35:18 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:35:18 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:35:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:19 np0005533938 python3.9[179704]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:19 np0005533938 systemd[1]: Reloading.
Nov 24 13:35:19 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:35:19 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:35:19 np0005533938 systemd[1]: Starting ovn_metadata_agent container...
Nov 24 13:35:20 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:20 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fea43c7fdcc53abb769fc1d07f729dd8d9aeebb3386c60b0f057da8ac7da108/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:20 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fea43c7fdcc53abb769fc1d07f729dd8d9aeebb3386c60b0f057da8ac7da108/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:20 np0005533938 systemd[1]: Started /usr/bin/podman healthcheck run 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf.
Nov 24 13:35:20 np0005533938 podman[179744]: 2025-11-24 18:35:20.354382001 +0000 UTC m=+0.401691994 container init 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + sudo -E kolla_set_configs
Nov 24 13:35:20 np0005533938 podman[179744]: 2025-11-24 18:35:20.38638467 +0000 UTC m=+0.433694593 container start 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 13:35:20 np0005533938 edpm-start-podman-container[179744]: ovn_metadata_agent
Nov 24 13:35:20 np0005533938 edpm-start-podman-container[179743]: Creating additional drop-in dependency for "ovn_metadata_agent" (016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf)
Nov 24 13:35:20 np0005533938 podman[179765]: 2025-11-24 18:35:20.63801703 +0000 UTC m=+0.243797247 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:35:20 np0005533938 systemd[1]: Reloading.
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Validating config file
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Copying service configuration files
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Writing out command to execute
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 24 13:35:20 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:35:20 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: ++ cat /run_command
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + CMD=neutron-ovn-metadata-agent
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + ARGS=
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + sudo kolla_copy_cacerts
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + [[ ! -n '' ]]
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + . kolla_extend_start
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: Running command: 'neutron-ovn-metadata-agent'
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + umask 0022
Nov 24 13:35:20 np0005533938 ovn_metadata_agent[179758]: + exec neutron-ovn-metadata-agent
Nov 24 13:35:20 np0005533938 systemd[1]: Started ovn_metadata_agent container.
Nov 24 13:35:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:21 np0005533938 systemd[1]: session-50.scope: Deactivated successfully.
Nov 24 13:35:21 np0005533938 systemd[1]: session-50.scope: Consumed 54.354s CPU time.
Nov 24 13:35:21 np0005533938 systemd-logind[822]: Session 50 logged out. Waiting for processes to exit.
Nov 24 13:35:21 np0005533938 systemd-logind[822]: Removed session 50.
Nov 24 13:35:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.675 179763 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.676 179763 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.677 179763 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.678 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.679 179763 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.680 179763 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.681 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.682 179763 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.683 179763 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.684 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.685 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.686 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.687 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.688 179763 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.689 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.690 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.691 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.692 179763 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.693 179763 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.694 179763 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.695 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.696 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.697 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.698 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.699 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.700 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.701 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.702 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.703 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.704 179763 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.705 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.706 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.707 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.708 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.709 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.710 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.711 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.712 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.713 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.714 179763 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.715 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.716 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.717 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.718 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.719 179763 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.720 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.721 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.722 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.723 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.724 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.725 179763 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.735 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.736 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.736 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.750 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 302e9f34-0427-4ff9-a29b-2fc7b5250666 (UUID: 302e9f34-0427-4ff9-a29b-2fc7b5250666) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.778 179763 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.778 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.779 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.779 179763 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.782 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.789 179763 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.795 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '302e9f34-0427-4ff9-a29b-2fc7b5250666'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff10f88f880>], external_ids={}, name=302e9f34-0427-4ff9-a29b-2fc7b5250666, nb_cfg_timestamp=1764009263291, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.796 179763 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff10f88fb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.797 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.797 179763 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.798 179763 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.798 179763 INFO oslo_service.service [-] Starting 1 workers
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.802 179763 DEBUG oslo_service.service [-] Started child 179867 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.806 179763 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzjm8w9ae/privsep.sock']
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.806 179867 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-232090'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.834 179867 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.835 179867 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.835 179867 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.839 179867 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.846 179867 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 24 13:35:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:22.853 179867 INFO eventlet.wsgi.server [-] (179867) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 24 13:35:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:23 np0005533938 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.484 179763 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.485 179763 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzjm8w9ae/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.362 179872 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.367 179872 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.368 179872 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.369 179872 INFO oslo.privsep.daemon [-] privsep daemon running as pid 179872
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.488 179872 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2670db-2ec1-479b-a468-f74e6ab5802f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.932 179872 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.933 179872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:35:23 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:23.933 179872 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.423 179872 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bd70c4-f519-4e09-8b45-70e17bf08459]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.425 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, column=external_ids, values=({'neutron:ovn-metadata-id': 'b0697a09-6663-5123-a0f9-534f577dc986'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.438 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.446 179763 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.447 179763 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.448 179763 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.449 179763 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.450 179763 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.451 179763 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.452 179763 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.453 179763 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.454 179763 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.455 179763 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.456 179763 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.457 179763 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.458 179763 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.459 179763 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.460 179763 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.461 179763 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.462 179763 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.463 179763 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.464 179763 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.465 179763 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.466 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.467 179763 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.468 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.469 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.470 179763 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.471 179763 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.472 179763 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.473 179763 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.474 179763 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.475 179763 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.476 179763 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.477 179763 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.478 179763 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.479 179763 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.480 179763 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.481 179763 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.482 179763 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.483 179763 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.484 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.485 179763 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.486 179763 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.487 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.488 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.489 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.490 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.491 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.492 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.493 179763 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:35:24 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:35:24.494 179763 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 13:35:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:27 np0005533938 systemd-logind[822]: New session 51 of user zuul.
Nov 24 13:35:27 np0005533938 systemd[1]: Started Session 51 of User zuul.
Nov 24 13:35:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:27 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev bb65d5c7-2907-4727-8532-bf05ec685320 does not exist
Nov 24 13:35:27 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 305b82d0-da00-4420-b15a-d7ac31f796cc does not exist
Nov 24 13:35:27 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev fa0d32c2-47d2-481f-8664-e6d29df4c5df does not exist
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:35:27 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:35:28 np0005533938 python3.9[180162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.324109131 +0000 UTC m=+0.041787762 container create 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:35:28 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:35:28 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:28 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:35:28 np0005533938 systemd[1]: Started libpod-conmon-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope.
Nov 24 13:35:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.394876828 +0000 UTC m=+0.112555459 container init 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.304461726 +0000 UTC m=+0.022140377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.402425504 +0000 UTC m=+0.120104135 container start 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:35:28 np0005533938 interesting_cartwright[180348]: 167 167
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.407205912 +0000 UTC m=+0.124884533 container attach 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:35:28 np0005533938 systemd[1]: libpod-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope: Deactivated successfully.
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.407732015 +0000 UTC m=+0.125410636 container died 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:35:28 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5b56bacb404663eac72ce302dadcd18fe93d08a5a28d8766d0c0c54202eda236-merged.mount: Deactivated successfully.
Nov 24 13:35:28 np0005533938 podman[180312]: 2025-11-24 18:35:28.446304727 +0000 UTC m=+0.163983358 container remove 93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:35:28 np0005533938 systemd[1]: libpod-conmon-93172b563331195361c7ab29fcea129d7e7216cff14d8f54b48d071072f6daae.scope: Deactivated successfully.
Nov 24 13:35:28 np0005533938 podman[180395]: 2025-11-24 18:35:28.631304712 +0000 UTC m=+0.046610041 container create 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:35:28 np0005533938 systemd[1]: Started libpod-conmon-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope.
Nov 24 13:35:28 np0005533938 podman[180395]: 2025-11-24 18:35:28.610193751 +0000 UTC m=+0.025499100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:28 np0005533938 podman[180395]: 2025-11-24 18:35:28.734043317 +0000 UTC m=+0.149348666 container init 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:35:28 np0005533938 podman[180395]: 2025-11-24 18:35:28.746985457 +0000 UTC m=+0.162290826 container start 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:35:28 np0005533938 podman[180395]: 2025-11-24 18:35:28.751410796 +0000 UTC m=+0.166716125 container attach 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:35:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:29 np0005533938 python3.9[180520]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:29 np0005533938 vigorous_nightingale[180440]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:35:29 np0005533938 vigorous_nightingale[180440]: --> relative data size: 1.0
Nov 24 13:35:29 np0005533938 vigorous_nightingale[180440]: --> All data devices are unavailable
Nov 24 13:35:29 np0005533938 systemd[1]: libpod-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope: Deactivated successfully.
Nov 24 13:35:29 np0005533938 podman[180395]: 2025-11-24 18:35:29.827272865 +0000 UTC m=+1.242578214 container died 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:35:29 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c4255c04d1af4709861b101b749bb78d02e9a6669024a0723231b49549b2b8ca-merged.mount: Deactivated successfully.
Nov 24 13:35:29 np0005533938 podman[180395]: 2025-11-24 18:35:29.88424323 +0000 UTC m=+1.299548559 container remove 655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:35:29 np0005533938 systemd[1]: libpod-conmon-655951ff7d544590983275eac954372af3e2b465b33bcdc9c186a1279a0e6cdf.scope: Deactivated successfully.
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.435121455 +0000 UTC m=+0.035935248 container create 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:35:30 np0005533938 systemd[1]: Started libpod-conmon-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope.
Nov 24 13:35:30 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.514475783 +0000 UTC m=+0.115289596 container init 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.420718679 +0000 UTC m=+0.021532492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.526485799 +0000 UTC m=+0.127299592 container start 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 24 13:35:30 np0005533938 gallant_chaum[180879]: 167 167
Nov 24 13:35:30 np0005533938 systemd[1]: libpod-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope: Deactivated successfully.
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.533073522 +0000 UTC m=+0.133887315 container attach 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.533738068 +0000 UTC m=+0.134551871 container died 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:35:30 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cf246aaccd6f7f625919c9d20f59b9751ee51871b419d8498f3033342aeb0250-merged.mount: Deactivated successfully.
Nov 24 13:35:30 np0005533938 podman[180834]: 2025-11-24 18:35:30.571594233 +0000 UTC m=+0.172408036 container remove 92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:35:30 np0005533938 systemd[1]: libpod-conmon-92ffe8c19ada179d8f3246f42fbe90b952b8428ef623af2784d483b283828cfc.scope: Deactivated successfully.
Nov 24 13:35:30 np0005533938 podman[180904]: 2025-11-24 18:35:30.71532897 +0000 UTC m=+0.037223660 container create 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:35:30 np0005533938 systemd[1]: Started libpod-conmon-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope.
Nov 24 13:35:30 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:30 np0005533938 python3.9[180881]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:35:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:30 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:30 np0005533938 systemd[1]: Reloading.
Nov 24 13:35:30 np0005533938 podman[180904]: 2025-11-24 18:35:30.797840496 +0000 UTC m=+0.119735196 container init 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:35:30 np0005533938 podman[180904]: 2025-11-24 18:35:30.699798736 +0000 UTC m=+0.021693456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:30 np0005533938 podman[180904]: 2025-11-24 18:35:30.809015501 +0000 UTC m=+0.130910191 container start 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:35:30 np0005533938 podman[180904]: 2025-11-24 18:35:30.812088907 +0000 UTC m=+0.133983617 container attach 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:35:30 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:35:30 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:35:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]: {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    "0": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "devices": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "/dev/loop3"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            ],
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_name": "ceph_lv0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_size": "21470642176",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "name": "ceph_lv0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "tags": {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_name": "ceph",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.crush_device_class": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.encrypted": "0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_id": "0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.vdo": "0"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            },
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "vg_name": "ceph_vg0"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        }
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    ],
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    "1": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "devices": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "/dev/loop4"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            ],
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_name": "ceph_lv1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_size": "21470642176",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "name": "ceph_lv1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "tags": {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_name": "ceph",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.crush_device_class": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.encrypted": "0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_id": "1",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.vdo": "0"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            },
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "vg_name": "ceph_vg1"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        }
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    ],
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    "2": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "devices": [
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "/dev/loop5"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            ],
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_name": "ceph_lv2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_size": "21470642176",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "name": "ceph_lv2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "tags": {
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.cluster_name": "ceph",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.crush_device_class": "",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.encrypted": "0",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osd_id": "2",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:                "ceph.vdo": "0"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            },
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "type": "block",
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:            "vg_name": "ceph_vg2"
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:        }
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]:    ]
Nov 24 13:35:31 np0005533938 frosty_roentgen[180921]: }
Nov 24 13:35:31 np0005533938 systemd[1]: libpod-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope: Deactivated successfully.
Nov 24 13:35:31 np0005533938 podman[180904]: 2025-11-24 18:35:31.550239653 +0000 UTC m=+0.872134373 container died 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:35:31 np0005533938 systemd[1]: var-lib-containers-storage-overlay-80d900be46907f3a22811acc2e88d6034065ac495e4f56d6db6655b920520e17-merged.mount: Deactivated successfully.
Nov 24 13:35:31 np0005533938 podman[180904]: 2025-11-24 18:35:31.617540094 +0000 UTC m=+0.939434784 container remove 30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:35:31 np0005533938 systemd[1]: libpod-conmon-30336203ebeacd924e35bbdc1377bb75706d6bbe7b504a2c9930567b4de2b3d0.scope: Deactivated successfully.
Nov 24 13:35:31 np0005533938 python3.9[181148]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:35:31 np0005533938 network[181242]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:35:31 np0005533938 network[181243]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:35:31 np0005533938 network[181244]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.270527378 +0000 UTC m=+0.042862379 container create 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.248783251 +0000 UTC m=+0.021118302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:32 np0005533938 systemd[1]: Started libpod-conmon-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope.
Nov 24 13:35:32 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.734475417 +0000 UTC m=+0.506810438 container init 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.748350379 +0000 UTC m=+0.520685410 container start 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.75244744 +0000 UTC m=+0.524782461 container attach 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:35:32 np0005533938 systemd[1]: libpod-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope: Deactivated successfully.
Nov 24 13:35:32 np0005533938 inspiring_meninsky[181308]: 167 167
Nov 24 13:35:32 np0005533938 conmon[181308]: conmon 20f6ce94f95e1d403e13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope/container/memory.events
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.753735662 +0000 UTC m=+0.526070663 container died 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:35:32 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6acfa450286cb5df5123856feb8280db0e8998697f450ce8da84368e08d46ff0-merged.mount: Deactivated successfully.
Nov 24 13:35:32 np0005533938 podman[181291]: 2025-11-24 18:35:32.792400446 +0000 UTC m=+0.564735447 container remove 20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:35:32 np0005533938 systemd[1]: libpod-conmon-20f6ce94f95e1d403e131a8ada6b85e3d51c39291a06f8a466f3a9c1fd1fb0d1.scope: Deactivated successfully.
Nov 24 13:35:32 np0005533938 podman[181343]: 2025-11-24 18:35:32.98022427 +0000 UTC m=+0.049059505 container create d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:35:33 np0005533938 systemd[1]: Started libpod-conmon-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope.
Nov 24 13:35:33 np0005533938 podman[181343]: 2025-11-24 18:35:32.958523297 +0000 UTC m=+0.027358572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:35:33 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:35:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:33 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:35:33 np0005533938 podman[181343]: 2025-11-24 18:35:33.070642739 +0000 UTC m=+0.139477984 container init d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:35:33 np0005533938 podman[181343]: 2025-11-24 18:35:33.077162649 +0000 UTC m=+0.145997884 container start d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:35:33 np0005533938 podman[181343]: 2025-11-24 18:35:33.08003773 +0000 UTC m=+0.148872965 container attach d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:35:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]: {
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_id": 0,
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "type": "bluestore"
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    },
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_id": 1,
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "type": "bluestore"
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    },
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_id": 2,
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:        "type": "bluestore"
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]:    }
Nov 24 13:35:34 np0005533938 beautiful_sinoussi[181365]: }
Nov 24 13:35:34 np0005533938 systemd[1]: libpod-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope: Deactivated successfully.
Nov 24 13:35:34 np0005533938 podman[181343]: 2025-11-24 18:35:34.067099759 +0000 UTC m=+1.135935004 container died d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:35:34 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2bc90ef8e381f3c8a3a7ea7b6a186e71c68adf4f904921723bb1abced4d3a9f4-merged.mount: Deactivated successfully.
Nov 24 13:35:34 np0005533938 podman[181343]: 2025-11-24 18:35:34.136513713 +0000 UTC m=+1.205348948 container remove d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:35:34 np0005533938 systemd[1]: libpod-conmon-d877fa2c54687a5fa31738f15b0fd92de1aee29c9b0c3dc78444f1ed28486f0a.scope: Deactivated successfully.
Nov 24 13:35:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:35:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:35:34 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 0fe8ff92-d150-41d6-934c-aeb086d54638 does not exist
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev dbce541f-d731-4e67-95c6-6fe7f725ca4b does not exist
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:35:34
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control']
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:35:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:35:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:35:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:36 np0005533938 python3.9[181698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:37 np0005533938 python3.9[181851]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:38 np0005533938 python3.9[182004]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:39 np0005533938 python3.9[182157]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:40 np0005533938 python3.9[182310]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:40 np0005533938 python3.9[182463]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:41 np0005533938 python3.9[182616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:35:42 np0005533938 python3.9[182769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:35:43 np0005533938 python3.9[182921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:43 np0005533938 python3.9[183073]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:44 np0005533938 python3.9[183225]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:45 np0005533938 python3.9[183377]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:45 np0005533938 python3.9[183529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:46 np0005533938 podman[183653]: 2025-11-24 18:35:46.265703353 +0000 UTC m=+0.102447556 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true)
Nov 24 13:35:46 np0005533938 python3.9[183701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:47 np0005533938 python3.9[183859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:47 np0005533938 python3.9[184011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:48 np0005533938 python3.9[184163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:49 np0005533938 python3.9[184315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:49 np0005533938 python3.9[184467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:50 np0005533938 python3.9[184619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:50 np0005533938 podman[184743]: 2025-11-24 18:35:50.817665367 +0000 UTC m=+0.060273671 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 13:35:51 np0005533938 python3.9[184788]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:35:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:51 np0005533938 python3.9[184941]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:52 np0005533938 python3.9[185093]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:35:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:53 np0005533938 python3.9[185245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:35:53 np0005533938 systemd[1]: Reloading.
Nov 24 13:35:53 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:35:53 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:35:54 np0005533938 python3.9[185432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:55 np0005533938 python3.9[185585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:56 np0005533938 python3.9[185738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:56 np0005533938 python3.9[185891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:57 np0005533938 python3.9[186044]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:35:58 np0005533938 python3.9[186197]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:58 np0005533938 python3.9[186350]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:35:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:35:59 np0005533938 python3.9[186503]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 24 13:36:00 np0005533938 python3.9[186656]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:36:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:01 np0005533938 python3.9[186814]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 13:36:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:02 np0005533938 python3.9[186974]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:36:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:03 np0005533938 python3.9[187058]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:16 np0005533938 podman[187109]: 2025-11-24 18:36:16.997385258 +0000 UTC m=+0.092135913 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 13:36:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:20 np0005533938 podman[187134]: 2025-11-24 18:36:20.958638022 +0000 UTC m=+0.054310084 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:36:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.727 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:36:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:36:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:36:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:36:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:36:34
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:36:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev cf065138-3ddc-4124-b90f-207e37a89ef9 does not exist
Nov 24 13:36:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 73580fb4-e01c-476c-b099-beaa02f1c5ec does not exist
Nov 24 13:36:35 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6e1cde58-8e1c-465a-a1df-7a720aa8ca06 does not exist
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:35 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.738085086 +0000 UTC m=+0.047942308 container create d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:36:35 np0005533938 systemd[1]: Started libpod-conmon-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope.
Nov 24 13:36:35 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.709757611 +0000 UTC m=+0.019614843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.813502898 +0000 UTC m=+0.123360110 container init d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.821864553 +0000 UTC m=+0.131721765 container start d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.826208339 +0000 UTC m=+0.136065551 container attach d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:36:35 np0005533938 elated_poincare[187614]: 167 167
Nov 24 13:36:35 np0005533938 systemd[1]: libpod-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope: Deactivated successfully.
Nov 24 13:36:35 np0005533938 conmon[187614]: conmon d596b78641045a72355f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope/container/memory.events
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.83846374 +0000 UTC m=+0.148320972 container died d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:36:35 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3f3fe67bb07375c7ac942b4e16f431476c5de4f2f1e0add50aa3545380b1159b-merged.mount: Deactivated successfully.
Nov 24 13:36:35 np0005533938 podman[187598]: 2025-11-24 18:36:35.885632598 +0000 UTC m=+0.195489820 container remove d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:36:35 np0005533938 systemd[1]: libpod-conmon-d596b78641045a72355f215b49e2058492c80e189467f33e97ae594d9363e002.scope: Deactivated successfully.
Nov 24 13:36:36 np0005533938 podman[187637]: 2025-11-24 18:36:36.034695487 +0000 UTC m=+0.039133241 container create 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:36:36 np0005533938 systemd[1]: Started libpod-conmon-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope.
Nov 24 13:36:36 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:36 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:36 np0005533938 podman[187637]: 2025-11-24 18:36:36.016684685 +0000 UTC m=+0.021122459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:36 np0005533938 podman[187637]: 2025-11-24 18:36:36.117619723 +0000 UTC m=+0.122057527 container init 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:36:36 np0005533938 podman[187637]: 2025-11-24 18:36:36.12892901 +0000 UTC m=+0.133366764 container start 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:36:36 np0005533938 podman[187637]: 2025-11-24 18:36:36.135952683 +0000 UTC m=+0.140390467 container attach 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:36:37 np0005533938 awesome_pike[187654]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:36:37 np0005533938 awesome_pike[187654]: --> relative data size: 1.0
Nov 24 13:36:37 np0005533938 awesome_pike[187654]: --> All data devices are unavailable
Nov 24 13:36:37 np0005533938 systemd[1]: libpod-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Deactivated successfully.
Nov 24 13:36:37 np0005533938 systemd[1]: libpod-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Consumed 1.026s CPU time.
Nov 24 13:36:37 np0005533938 podman[187688]: 2025-11-24 18:36:37.29001751 +0000 UTC m=+0.031447503 container died 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:36:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:37 np0005533938 systemd[1]: var-lib-containers-storage-overlay-9bf01e8656f307c951776757ddbb5b27d3b1b0602a3bdac6ae7d183ec60dbbde-merged.mount: Deactivated successfully.
Nov 24 13:36:37 np0005533938 podman[187688]: 2025-11-24 18:36:37.354465942 +0000 UTC m=+0.095895915 container remove 3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:36:37 np0005533938 systemd[1]: libpod-conmon-3b9bfae0d071364546f4c3518c3ee417c8bd92668d176b09df5e83461c4fc936.scope: Deactivated successfully.
Nov 24 13:36:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.052882396 +0000 UTC m=+0.066918793 container create 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:36:38 np0005533938 systemd[1]: Started libpod-conmon-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope.
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.011270415 +0000 UTC m=+0.025306862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.13897196 +0000 UTC m=+0.153008347 container init 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.150837171 +0000 UTC m=+0.164873568 container start 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.153961227 +0000 UTC m=+0.167997614 container attach 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:36:38 np0005533938 magical_thompson[187860]: 167 167
Nov 24 13:36:38 np0005533938 systemd[1]: libpod-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope: Deactivated successfully.
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.157214217 +0000 UTC m=+0.171250684 container died 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:36:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-889f06631066b0720c656135a2d9a27efa08ff14344f7e5a093904c4c88d4565-merged.mount: Deactivated successfully.
Nov 24 13:36:38 np0005533938 podman[187843]: 2025-11-24 18:36:38.222154991 +0000 UTC m=+0.236191388 container remove 52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:36:38 np0005533938 systemd[1]: libpod-conmon-52468178b66d1e6996fa8dc7172b77161c094fba63633b39b6cdeaa1b707b0d3.scope: Deactivated successfully.
Nov 24 13:36:38 np0005533938 podman[187886]: 2025-11-24 18:36:38.391748394 +0000 UTC m=+0.047833625 container create bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:36:38 np0005533938 systemd[1]: Started libpod-conmon-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope.
Nov 24 13:36:38 np0005533938 podman[187886]: 2025-11-24 18:36:38.371487567 +0000 UTC m=+0.027572788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:38 np0005533938 podman[187886]: 2025-11-24 18:36:38.478821722 +0000 UTC m=+0.134906943 container init bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:36:38 np0005533938 podman[187886]: 2025-11-24 18:36:38.486358957 +0000 UTC m=+0.142444158 container start bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:36:38 np0005533938 podman[187886]: 2025-11-24 18:36:38.488760696 +0000 UTC m=+0.144845897 container attach bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]: {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    "0": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "devices": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "/dev/loop3"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            ],
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_name": "ceph_lv0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_size": "21470642176",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "name": "ceph_lv0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "tags": {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_name": "ceph",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.crush_device_class": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.encrypted": "0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_id": "0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.vdo": "0"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            },
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "vg_name": "ceph_vg0"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        }
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    ],
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    "1": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "devices": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "/dev/loop4"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            ],
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_name": "ceph_lv1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_size": "21470642176",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "name": "ceph_lv1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "tags": {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_name": "ceph",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.crush_device_class": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.encrypted": "0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_id": "1",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.vdo": "0"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            },
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "vg_name": "ceph_vg1"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        }
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    ],
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    "2": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "devices": [
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "/dev/loop5"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            ],
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_name": "ceph_lv2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_size": "21470642176",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "name": "ceph_lv2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "tags": {
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.cluster_name": "ceph",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.crush_device_class": "",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.encrypted": "0",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osd_id": "2",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:                "ceph.vdo": "0"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            },
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "type": "block",
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:            "vg_name": "ceph_vg2"
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:        }
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]:    ]
Nov 24 13:36:39 np0005533938 xenodochial_margulis[187903]: }
Nov 24 13:36:39 np0005533938 systemd[1]: libpod-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope: Deactivated successfully.
Nov 24 13:36:39 np0005533938 podman[187886]: 2025-11-24 18:36:39.277143798 +0000 UTC m=+0.933228999 container died bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:36:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-be1d6dcb4565225c2cabab6c084171defcb863906c93889aebbd3ab31131007e-merged.mount: Deactivated successfully.
Nov 24 13:36:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:39 np0005533938 podman[187886]: 2025-11-24 18:36:39.327775471 +0000 UTC m=+0.983860672 container remove bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:36:39 np0005533938 systemd[1]: libpod-conmon-bfa0df0a3d82f92080c98f18167eaf4dc93691075aa035ec636e2dd63b7bc607.scope: Deactivated successfully.
Nov 24 13:36:39 np0005533938 podman[188062]: 2025-11-24 18:36:39.959840846 +0000 UTC m=+0.035607145 container create d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:36:39 np0005533938 systemd[1]: Started libpod-conmon-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope.
Nov 24 13:36:40 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:40.041179833 +0000 UTC m=+0.116946182 container init d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:39.94492514 +0000 UTC m=+0.020691459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:40.048835791 +0000 UTC m=+0.124602100 container start d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:40.05167032 +0000 UTC m=+0.127436639 container attach d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 24 13:36:40 np0005533938 heuristic_bassi[188078]: 167 167
Nov 24 13:36:40 np0005533938 systemd[1]: libpod-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope: Deactivated successfully.
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:40.054339656 +0000 UTC m=+0.130105955 container died d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:36:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7e95685bdb242075575ba975df4558fe04926f88146593ba06025bc54b9b65b9-merged.mount: Deactivated successfully.
Nov 24 13:36:40 np0005533938 podman[188062]: 2025-11-24 18:36:40.098400677 +0000 UTC m=+0.174167016 container remove d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:36:40 np0005533938 systemd[1]: libpod-conmon-d27ec91250054a9e69fcd95cca10b3c1e53226cb25074bdc24db946115046775.scope: Deactivated successfully.
Nov 24 13:36:40 np0005533938 podman[188102]: 2025-11-24 18:36:40.263707805 +0000 UTC m=+0.040424313 container create b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:36:40 np0005533938 systemd[1]: Started libpod-conmon-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope.
Nov 24 13:36:40 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:36:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:36:40 np0005533938 podman[188102]: 2025-11-24 18:36:40.24516083 +0000 UTC m=+0.021877328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:36:40 np0005533938 podman[188102]: 2025-11-24 18:36:40.350109126 +0000 UTC m=+0.126825634 container init b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:36:40 np0005533938 podman[188102]: 2025-11-24 18:36:40.358528473 +0000 UTC m=+0.135244951 container start b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:36:40 np0005533938 podman[188102]: 2025-11-24 18:36:40.362225133 +0000 UTC m=+0.138941631 container attach b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]: {
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_id": 0,
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "type": "bluestore"
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    },
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_id": 1,
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "type": "bluestore"
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    },
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_id": 2,
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:        "type": "bluestore"
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]:    }
Nov 24 13:36:41 np0005533938 vigorous_noether[188119]: }
Nov 24 13:36:41 np0005533938 systemd[1]: libpod-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope: Deactivated successfully.
Nov 24 13:36:41 np0005533938 podman[188102]: 2025-11-24 18:36:41.284420649 +0000 UTC m=+1.061137127 container died b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:36:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-da8cbf748fd95d33b92bb8a5234b89e02570293e7d2febd601d0ad474444d493-merged.mount: Deactivated successfully.
Nov 24 13:36:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:41 np0005533938 podman[188102]: 2025-11-24 18:36:41.331947576 +0000 UTC m=+1.108664054 container remove b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:36:41 np0005533938 systemd[1]: libpod-conmon-b639d854128d2ab8f8650cc7a39bd18660bc1a88f18692209e73393ab525a1c7.scope: Deactivated successfully.
Nov 24 13:36:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:36:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:36:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:41 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev ffb89280-5c80-45c5-9232-23ab21c10fbf does not exist
Nov 24 13:36:41 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev e9acbac3-f6ec-402a-82a2-032f78e6036d does not exist
Nov 24 13:36:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:36:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:36:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:46 np0005533938 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:36:46 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:36:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.624995) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407625031, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1856, "num_deletes": 250, "total_data_size": 3130629, "memory_usage": 3178776, "flush_reason": "Manual Compaction"}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407635232, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1769378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11763, "largest_seqno": 13618, "table_properties": {"data_size": 1763367, "index_size": 3022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15012, "raw_average_key_size": 20, "raw_value_size": 1750105, "raw_average_value_size": 2342, "num_data_blocks": 141, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009197, "oldest_key_time": 1764009197, "file_creation_time": 1764009407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10269 microseconds, and 4291 cpu microseconds.
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.635269) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1769378 bytes OK
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.635284) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636784) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636798) EVENT_LOG_v1 {"time_micros": 1764009407636793, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.636814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3122811, prev total WAL file size 3122811, number of live WAL files 2.
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.637646) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1727KB)], [29(7636KB)]
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407637685, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9589458, "oldest_snapshot_seqno": -1}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4048 keys, 7591176 bytes, temperature: kUnknown
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407670493, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7591176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7562329, "index_size": 17601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96270, "raw_average_key_size": 23, "raw_value_size": 7487559, "raw_average_value_size": 1849, "num_data_blocks": 767, "num_entries": 4048, "num_filter_entries": 4048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.670705) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7591176 bytes
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.671819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 291.7 rd, 230.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 4461, records dropped: 413 output_compression: NoCompression
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.671835) EVENT_LOG_v1 {"time_micros": 1764009407671826, "job": 12, "event": "compaction_finished", "compaction_time_micros": 32877, "compaction_time_cpu_micros": 15480, "output_level": 6, "num_output_files": 1, "total_output_size": 7591176, "num_input_records": 4461, "num_output_records": 4048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407672185, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009407673327, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.637576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:36:47.673404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:36:47 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 24 13:36:48 np0005533938 podman[188223]: 2025-11-24 18:36:48.015174476 +0000 UTC m=+0.103469810 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 13:36:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:51 np0005533938 podman[188250]: 2025-11-24 18:36:51.971152202 +0000 UTC m=+0.061498861 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 13:36:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:55 np0005533938 kernel: SELinux:  Converting 2769 SID table entries...
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:36:55 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:36:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:36:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:36:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:18 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 24 13:37:19 np0005533938 podman[196015]: 2025-11-24 18:37:19.015682392 +0000 UTC m=+0.098182181 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:37:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:37:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:37:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:37:22.728 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:37:22 np0005533938 podman[198473]: 2025-11-24 18:37:22.988951143 +0000 UTC m=+0.079061742 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 13:37:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:37:34
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'backups']
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:37:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:37:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d346a571-67bd-4922-a7fd-8305c0d2659e does not exist
Nov 24 13:37:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f71744e7-58ec-4494-aaf1-3eb0a9283834 does not exist
Nov 24 13:37:42 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 9480c747-57b1-4ee9-9a50-486a48e45f37 does not exist
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:37:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.704005139 +0000 UTC m=+0.034943970 container create ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 13:37:42 np0005533938 systemd[1]: Started libpod-conmon-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope.
Nov 24 13:37:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.687990776 +0000 UTC m=+0.018929637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.795402005 +0000 UTC m=+0.126340896 container init ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.803649508 +0000 UTC m=+0.134588369 container start ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.807956454 +0000 UTC m=+0.138895315 container attach ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:37:42 np0005533938 bold_hamilton[205405]: 167 167
Nov 24 13:37:42 np0005533938 systemd[1]: libpod-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope: Deactivated successfully.
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.809534543 +0000 UTC m=+0.140473374 container died ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:37:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay-72810a9b26150906649f8686cbf6bcf2b141c26a058d7d6f5f8601f27723c13b-merged.mount: Deactivated successfully.
Nov 24 13:37:42 np0005533938 podman[205389]: 2025-11-24 18:37:42.84685117 +0000 UTC m=+0.177790001 container remove ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:42 np0005533938 systemd[1]: libpod-conmon-ef999f3f438bc0f2bcf405e18b9ea15f2b508bead1cdede2a018c66b978e44b2.scope: Deactivated successfully.
Nov 24 13:37:43 np0005533938 podman[205430]: 2025-11-24 18:37:43.006705259 +0000 UTC m=+0.039824400 container create 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:37:43 np0005533938 systemd[1]: Started libpod-conmon-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope.
Nov 24 13:37:43 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:43 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:43 np0005533938 podman[205430]: 2025-11-24 18:37:43.067748039 +0000 UTC m=+0.100867190 container init 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:37:43 np0005533938 podman[205430]: 2025-11-24 18:37:43.077593831 +0000 UTC m=+0.110712972 container start 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:37:43 np0005533938 podman[205430]: 2025-11-24 18:37:42.988687186 +0000 UTC m=+0.021806347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:43 np0005533938 podman[205430]: 2025-11-24 18:37:43.086051339 +0000 UTC m=+0.119170510 container attach 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:37:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:44 np0005533938 practical_engelbart[205447]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:37:44 np0005533938 practical_engelbart[205447]: --> relative data size: 1.0
Nov 24 13:37:44 np0005533938 practical_engelbart[205447]: --> All data devices are unavailable
Nov 24 13:37:44 np0005533938 systemd[1]: libpod-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope: Deactivated successfully.
Nov 24 13:37:44 np0005533938 podman[205430]: 2025-11-24 18:37:44.061330929 +0000 UTC m=+1.094450130 container died 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:37:44 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bd9b766a7232bb221aa8b72ed1c77a941a6e521913cf3a6784a3bf11479bad9d-merged.mount: Deactivated successfully.
Nov 24 13:37:44 np0005533938 podman[205430]: 2025-11-24 18:37:44.141281254 +0000 UTC m=+1.174400405 container remove 60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:37:44 np0005533938 systemd[1]: libpod-conmon-60bc5643dd4161386ab0f81ab1816d11a2841fd2ab891af0e890b62ee108516e.scope: Deactivated successfully.
Nov 24 13:37:44 np0005533938 podman[205634]: 2025-11-24 18:37:44.834760438 +0000 UTC m=+0.025732213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.158371392 +0000 UTC m=+0.349343127 container create b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:37:45 np0005533938 systemd[1]: Started libpod-conmon-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope.
Nov 24 13:37:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.308416379 +0000 UTC m=+0.499388194 container init b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.319762878 +0000 UTC m=+0.510734633 container start b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.32597489 +0000 UTC m=+0.516946655 container attach b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:37:45 np0005533938 infallible_jennings[205651]: 167 167
Nov 24 13:37:45 np0005533938 systemd[1]: libpod-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope: Deactivated successfully.
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.329165849 +0000 UTC m=+0.520137604 container died b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:37:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:45 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4dd4e94291aa5e76257be40c7b100171adb8bdf341759975359e7ec4ba9ce6fd-merged.mount: Deactivated successfully.
Nov 24 13:37:45 np0005533938 podman[205634]: 2025-11-24 18:37:45.393831598 +0000 UTC m=+0.584803363 container remove b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:45 np0005533938 systemd[1]: libpod-conmon-b7f0d4beec705ef772805d92816e63461dfe53d13b17354a3816c7a45b8313ff.scope: Deactivated successfully.
Nov 24 13:37:45 np0005533938 kernel: SELinux:  Converting 2770 SID table entries...
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability network_peer_controls=1
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability open_perms=1
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability extended_socket_class=1
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability always_check_network=0
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 24 13:37:45 np0005533938 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 24 13:37:45 np0005533938 podman[205678]: 2025-11-24 18:37:45.63844979 +0000 UTC m=+0.074329978 container create 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:37:45 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 24 13:37:45 np0005533938 podman[205678]: 2025-11-24 18:37:45.603232165 +0000 UTC m=+0.039112413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:45 np0005533938 systemd[1]: Started libpod-conmon-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope.
Nov 24 13:37:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:45 np0005533938 podman[205678]: 2025-11-24 18:37:45.784390967 +0000 UTC m=+0.220271195 container init 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:37:45 np0005533938 podman[205678]: 2025-11-24 18:37:45.800616306 +0000 UTC m=+0.236496494 container start 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:37:45 np0005533938 podman[205678]: 2025-11-24 18:37:45.807245819 +0000 UTC m=+0.243126067 container attach 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]: {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    "0": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "devices": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "/dev/loop3"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            ],
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_name": "ceph_lv0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_size": "21470642176",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "name": "ceph_lv0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "tags": {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_name": "ceph",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.crush_device_class": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.encrypted": "0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_id": "0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.vdo": "0"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            },
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "vg_name": "ceph_vg0"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        }
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    ],
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    "1": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "devices": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "/dev/loop4"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            ],
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_name": "ceph_lv1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_size": "21470642176",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "name": "ceph_lv1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "tags": {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_name": "ceph",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.crush_device_class": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.encrypted": "0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_id": "1",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.vdo": "0"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            },
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "vg_name": "ceph_vg1"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        }
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    ],
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    "2": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "devices": [
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "/dev/loop5"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            ],
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_name": "ceph_lv2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_size": "21470642176",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "name": "ceph_lv2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "tags": {
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.cluster_name": "ceph",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.crush_device_class": "",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.encrypted": "0",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osd_id": "2",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:                "ceph.vdo": "0"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            },
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "type": "block",
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:            "vg_name": "ceph_vg2"
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:        }
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]:    ]
Nov 24 13:37:46 np0005533938 upbeat_goldwasser[205695]: }
Nov 24 13:37:46 np0005533938 systemd[1]: libpod-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope: Deactivated successfully.
Nov 24 13:37:46 np0005533938 podman[205678]: 2025-11-24 18:37:46.585959298 +0000 UTC m=+1.021839446 container died 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:37:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f45e0f9b3867c044533c40556a124766346a25b86041dd36ca8be245b8d2d9ab-merged.mount: Deactivated successfully.
Nov 24 13:37:46 np0005533938 podman[205678]: 2025-11-24 18:37:46.65115366 +0000 UTC m=+1.087033808 container remove 8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:37:46 np0005533938 systemd[1]: libpod-conmon-8f68951e2a8d71b94344147db363721adc298f0771926a4292bc2517d2208ddb.scope: Deactivated successfully.
Nov 24 13:37:46 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:37:46 np0005533938 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 24 13:37:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.377301867 +0000 UTC m=+0.044834603 container create 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:37:47 np0005533938 systemd[1]: Started libpod-conmon-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope.
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.354735602 +0000 UTC m=+0.022268348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.489845143 +0000 UTC m=+0.157377859 container init 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.500593847 +0000 UTC m=+0.168126543 container start 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.504301678 +0000 UTC m=+0.171834374 container attach 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:47 np0005533938 keen_hamilton[205894]: 167 167
Nov 24 13:37:47 np0005533938 systemd[1]: libpod-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope: Deactivated successfully.
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.509980708 +0000 UTC m=+0.177513404 container died 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:37:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ffbfcd4c775ed4d3756cfcda6f4531bd165c8b0e082f89b7ab33a04da140a415-merged.mount: Deactivated successfully.
Nov 24 13:37:47 np0005533938 podman[205878]: 2025-11-24 18:37:47.54832311 +0000 UTC m=+0.215855806 container remove 84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:47 np0005533938 systemd[1]: libpod-conmon-84e47540d65121285b925a6d17fb5a217fe1a5fb613083dac775fe75ed7ca848.scope: Deactivated successfully.
Nov 24 13:37:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:47 np0005533938 podman[205918]: 2025-11-24 18:37:47.790353579 +0000 UTC m=+0.079934816 container create 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:47 np0005533938 podman[205918]: 2025-11-24 18:37:47.74931716 +0000 UTC m=+0.038898467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:37:47 np0005533938 systemd[1]: Started libpod-conmon-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope.
Nov 24 13:37:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:37:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:37:47 np0005533938 podman[205918]: 2025-11-24 18:37:47.936334757 +0000 UTC m=+0.225916014 container init 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:37:47 np0005533938 podman[205918]: 2025-11-24 18:37:47.953247792 +0000 UTC m=+0.242829019 container start 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:37:47 np0005533938 podman[205918]: 2025-11-24 18:37:47.956336998 +0000 UTC m=+0.245918305 container attach 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 24 13:37:48 np0005533938 eager_williams[205945]: {
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_id": 0,
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "type": "bluestore"
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    },
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_id": 1,
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "type": "bluestore"
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    },
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_id": 2,
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:37:48 np0005533938 eager_williams[205945]:        "type": "bluestore"
Nov 24 13:37:48 np0005533938 eager_williams[205945]:    }
Nov 24 13:37:48 np0005533938 eager_williams[205945]: }
Nov 24 13:37:48 np0005533938 systemd[1]: libpod-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Deactivated successfully.
Nov 24 13:37:48 np0005533938 podman[205918]: 2025-11-24 18:37:48.952972052 +0000 UTC m=+1.242553279 container died 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:37:48 np0005533938 systemd[1]: libpod-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Consumed 1.003s CPU time.
Nov 24 13:37:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4bd705771a980f4689033c54e71d2a3b95940f38b992feb2324a55b9a5aaf35e-merged.mount: Deactivated successfully.
Nov 24 13:37:49 np0005533938 podman[205918]: 2025-11-24 18:37:49.023319241 +0000 UTC m=+1.312900478 container remove 744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:37:49 np0005533938 systemd[1]: libpod-conmon-744a5be3af8fd9b3485b2b314b5121b98c828d383ace69c61a8a7e722b4ac298.scope: Deactivated successfully.
Nov 24 13:37:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:37:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:37:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 21b44f78-fda7-4419-8831-ce4444ca722e does not exist
Nov 24 13:37:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 50d1240f-401e-4848-a114-ae4bf5eb5e75 does not exist
Nov 24 13:37:49 np0005533938 podman[206023]: 2025-11-24 18:37:49.151640435 +0000 UTC m=+0.092045033 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller)
Nov 24 13:37:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:37:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:53 np0005533938 podman[206280]: 2025-11-24 18:37:53.302599445 +0000 UTC m=+0.056164902 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:37:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:54 np0005533938 systemd[1]: Stopping OpenSSH server daemon...
Nov 24 13:37:54 np0005533938 systemd[1]: sshd.service: Deactivated successfully.
Nov 24 13:37:54 np0005533938 systemd[1]: Stopped OpenSSH server daemon.
Nov 24 13:37:54 np0005533938 systemd[1]: sshd.service: Consumed 2.668s CPU time, read 32.0K from disk, written 12.0K to disk.
Nov 24 13:37:54 np0005533938 systemd[1]: Stopped target sshd-keygen.target.
Nov 24 13:37:54 np0005533938 systemd[1]: Stopping sshd-keygen.target...
Nov 24 13:37:54 np0005533938 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 13:37:54 np0005533938 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 13:37:54 np0005533938 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 24 13:37:54 np0005533938 systemd[1]: Reached target sshd-keygen.target.
Nov 24 13:37:54 np0005533938 systemd[1]: Starting OpenSSH server daemon...
Nov 24 13:37:54 np0005533938 systemd[1]: Started OpenSSH server daemon.
Nov 24 13:37:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:56 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:37:56 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:37:56 np0005533938 systemd[1]: Reloading.
Nov 24 13:37:56 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:37:56 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:37:56 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:37:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:37:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:37:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:00 np0005533938 python3.9[211353]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:38:00 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:00 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:00 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:01 np0005533938 python3.9[212689]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:38:02 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:02 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:02 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:03 np0005533938 python3.9[214221]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:38:03 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:03 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:03 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:04 np0005533938 python3.9[215612]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:38:04 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:04 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:04 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:04 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:38:04 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:38:04 np0005533938 systemd[1]: man-db-cache-update.service: Consumed 10.344s CPU time.
Nov 24 13:38:04 np0005533938 systemd[1]: run-r4471f54cb0244fd18094487f21223860.service: Deactivated successfully.
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:05 np0005533938 python3.9[216458]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:05 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:05 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:05 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:06 np0005533938 python3.9[216648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:06 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:06 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:06 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:07 np0005533938 python3.9[216839]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:08 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:08 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:08 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:09 np0005533938 python3.9[217029]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:10 np0005533938 python3.9[217184]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:10 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:10 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:10 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:11 np0005533938 python3.9[217374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 24 13:38:11 np0005533938 systemd[1]: Reloading.
Nov 24 13:38:11 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:38:11 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:38:11 np0005533938 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 24 13:38:11 np0005533938 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.645940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492645962, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1141, "num_deletes": 506, "total_data_size": 1243822, "memory_usage": 1276688, "flush_reason": "Manual Compaction"}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492653169, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1232033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13619, "largest_seqno": 14759, "table_properties": {"data_size": 1226918, "index_size": 2127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13285, "raw_average_key_size": 17, "raw_value_size": 1214807, "raw_average_value_size": 1624, "num_data_blocks": 97, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009408, "oldest_key_time": 1764009408, "file_creation_time": 1764009492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 7261 microseconds, and 3337 cpu microseconds.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.653200) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1232033 bytes OK
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.653214) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654587) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654635) EVENT_LOG_v1 {"time_micros": 1764009492654624, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.654659) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1237489, prev total WAL file size 1237489, number of live WAL files 2.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.655354) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1203KB)], [32(7413KB)]
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492655387, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8823209, "oldest_snapshot_seqno": -1}
Nov 24 13:38:12 np0005533938 python3.9[217567]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3771 keys, 6894971 bytes, temperature: kUnknown
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492692639, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6894971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6868308, "index_size": 16122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92610, "raw_average_key_size": 24, "raw_value_size": 6798551, "raw_average_value_size": 1802, "num_data_blocks": 683, "num_entries": 3771, "num_filter_entries": 3771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.692822) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6894971 bytes
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.693984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 184.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.2 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(12.8) write-amplify(5.6) OK, records in: 4796, records dropped: 1025 output_compression: NoCompression
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.693997) EVENT_LOG_v1 {"time_micros": 1764009492693991, "job": 14, "event": "compaction_finished", "compaction_time_micros": 37329, "compaction_time_cpu_micros": 20181, "output_level": 6, "num_output_files": 1, "total_output_size": 6894971, "num_input_records": 4796, "num_output_records": 3771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492694245, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009492695305, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.655283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:12 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:38:12.695378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:38:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:13 np0005533938 python3.9[217722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:14 np0005533938 python3.9[217877]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:15 np0005533938 python3.9[218032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:16 np0005533938 python3.9[218187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:17 np0005533938 python3.9[218342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:18 np0005533938 python3.9[218497]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:18 np0005533938 python3.9[218652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:19 np0005533938 podman[218656]: 2025-11-24 18:38:19.993471596 +0000 UTC m=+0.091179632 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 13:38:20 np0005533938 python3.9[218833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:21 np0005533938 python3.9[218988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:22 np0005533938 python3.9[219143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:38:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:38:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:38:22.729 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:38:23 np0005533938 python3.9[219298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:23 np0005533938 podman[219425]: 2025-11-24 18:38:23.74911549 +0000 UTC m=+0.056001277 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:38:24 np0005533938 python3.9[219470]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:24 np0005533938 python3.9[219629]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 24 13:38:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:25 np0005533938 python3.9[219784]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:26 np0005533938 python3.9[219936]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:27 np0005533938 python3.9[220088]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:27 np0005533938 python3.9[220240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:28 np0005533938 python3.9[220392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:29 np0005533938 python3.9[220544]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:38:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:30 np0005533938 python3.9[220696]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:30 np0005533938 python3.9[220821]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009509.433578-554-124272828743679/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:31 np0005533938 python3.9[220973]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:32 np0005533938 python3.9[221098]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009511.111376-554-207048086746111/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:32 np0005533938 python3.9[221250]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:33 np0005533938 python3.9[221375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009512.3802423-554-187540077231187/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:34 np0005533938 python3.9[221527]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:38:34
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr']
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:38:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:38:35 np0005533938 python3.9[221652]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009513.716708-554-191935304423797/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:35 np0005533938 python3.9[221804]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:36 np0005533938 python3.9[221929]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009515.1874418-554-141367386498368/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:37 np0005533938 python3.9[222081]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:37 np0005533938 python3.9[222206]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009516.4135208-554-185674984620875/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:38 np0005533938 python3.9[222358]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:38 np0005533938 python3.9[222481]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009517.7607574-554-45710354844842/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:39 np0005533938 python3.9[222633]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:39 np0005533938 python3.9[222758]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764009518.8937547-554-200368818621773/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:40 np0005533938 python3.9[222910]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 24 13:38:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:41 np0005533938 python3.9[223063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:42 np0005533938 python3.9[223215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:42 np0005533938 python3.9[223367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:38:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:43 np0005533938 python3.9[223519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:44 np0005533938 python3.9[223671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:45 np0005533938 python3.9[223823]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:45 np0005533938 python3.9[223975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:46 np0005533938 python3.9[224127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:47 np0005533938 python3.9[224279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:38:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3319 writes, 14K keys, 3319 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3319 writes, 3319 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1288 writes, 5829 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s#012Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.8      0.21              0.04         7    0.029       0      0       0.0       0.0#012  L6      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    211.8    174.7      0.23              0.11         6    0.039     24K   3197       0.0       0.0#012 Sum      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    112.2    127.2      0.44              0.15        13    0.034     24K   3197       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    178.8    179.3      0.19              0.09         8    0.024     17K   2466       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    211.8    174.7      0.23              0.11         6    0.039     24K   3197       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     74.2      0.20              0.04         6    0.034       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 1.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(106,1.43 MB,0.463709%) FilterBlock(14,75.42 KB,0.0239137%) IndexBlock(14,144.53 KB,0.0458259%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:38:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:47 np0005533938 python3.9[224431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:48 np0005533938 python3.9[224583]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:49 np0005533938 python3.9[224735]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:49 np0005533938 python3.9[224999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 779778df-0b31-40b2-bfc3-07d9dd01cab9 does not exist
Nov 24 13:38:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 9ec13bae-6fe7-4455-9c3c-366f65b5fd7a does not exist
Nov 24 13:38:49 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6e7d6fff-388c-4c10-a450-3908d4aba869 does not exist
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:38:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:38:50 np0005533938 podman[225167]: 2025-11-24 18:38:50.119473722 +0000 UTC m=+0.074406510 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.385150102 +0000 UTC m=+0.037343319 container create 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:38:50 np0005533938 python3.9[225308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:50 np0005533938 systemd[1]: Started libpod-conmon-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope.
Nov 24 13:38:50 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.367254092 +0000 UTC m=+0.019447309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.467755972 +0000 UTC m=+0.119949209 container init 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:38:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:38:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:50 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.474706233 +0000 UTC m=+0.126899460 container start 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.47743322 +0000 UTC m=+0.129626437 container attach 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:38:50 np0005533938 distracted_elbakyan[225348]: 167 167
Nov 24 13:38:50 np0005533938 systemd[1]: libpod-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope: Deactivated successfully.
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.479995463 +0000 UTC m=+0.132188680 container died 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:38:50 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5326b85218e25a95f1f66c7cbd3745005e8d9dda781d1b48181bbcf02684e010-merged.mount: Deactivated successfully.
Nov 24 13:38:50 np0005533938 podman[225332]: 2025-11-24 18:38:50.516785217 +0000 UTC m=+0.168978464 container remove 2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elbakyan, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:38:50 np0005533938 systemd[1]: libpod-conmon-2008f009803bf10ad344f029009360983c82c626f796eb68710207c468dc46e6.scope: Deactivated successfully.
Nov 24 13:38:50 np0005533938 podman[225409]: 2025-11-24 18:38:50.668438814 +0000 UTC m=+0.035575055 container create 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:38:50 np0005533938 systemd[1]: Started libpod-conmon-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope.
Nov 24 13:38:50 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:50 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:50 np0005533938 podman[225409]: 2025-11-24 18:38:50.653313553 +0000 UTC m=+0.020449824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:50 np0005533938 podman[225409]: 2025-11-24 18:38:50.762911116 +0000 UTC m=+0.130047357 container init 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:38:50 np0005533938 podman[225409]: 2025-11-24 18:38:50.769376335 +0000 UTC m=+0.136512576 container start 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:38:50 np0005533938 podman[225409]: 2025-11-24 18:38:50.772259046 +0000 UTC m=+0.139395287 container attach 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:38:51 np0005533938 python3.9[225545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:51 np0005533938 python3.9[225676]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009530.6421704-775-131328091318326/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:51 np0005533938 zealous_noether[225459]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:38:51 np0005533938 zealous_noether[225459]: --> relative data size: 1.0
Nov 24 13:38:51 np0005533938 zealous_noether[225459]: --> All data devices are unavailable
Nov 24 13:38:51 np0005533938 systemd[1]: libpod-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope: Deactivated successfully.
Nov 24 13:38:51 np0005533938 podman[225409]: 2025-11-24 18:38:51.830120096 +0000 UTC m=+1.197256347 container died 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:38:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5503b19bc255470fa142be1169d58793f1658ddb636d631870973319afa75c60-merged.mount: Deactivated successfully.
Nov 24 13:38:51 np0005533938 podman[225409]: 2025-11-24 18:38:51.881499518 +0000 UTC m=+1.248635749 container remove 2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:38:51 np0005533938 systemd[1]: libpod-conmon-2b595c31ab6a013a7c7ae6fe2ba30ad616dbeabe6cca3d203e79d4be9eb55ac1.scope: Deactivated successfully.
Nov 24 13:38:52 np0005533938 python3.9[225955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.441087232 +0000 UTC m=+0.036829076 container create 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:38:52 np0005533938 systemd[1]: Started libpod-conmon-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope.
Nov 24 13:38:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.516514536 +0000 UTC m=+0.112256430 container init 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.423834968 +0000 UTC m=+0.019576832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.524174004 +0000 UTC m=+0.119915848 container start 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.527493505 +0000 UTC m=+0.123235349 container attach 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:38:52 np0005533938 nice_lewin[226036]: 167 167
Nov 24 13:38:52 np0005533938 systemd[1]: libpod-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope: Deactivated successfully.
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.529490895 +0000 UTC m=+0.125232739 container died 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:38:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-38f4347499c94109b158539fd4e1252d23003c5acd97bee8381dbe478a6232d0-merged.mount: Deactivated successfully.
Nov 24 13:38:52 np0005533938 podman[225996]: 2025-11-24 18:38:52.565698724 +0000 UTC m=+0.161440568 container remove 06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 24 13:38:52 np0005533938 systemd[1]: libpod-conmon-06e5af4a84a0af670db3b254a47047b53685569d0ebeb51c07f79e4a0ce3d7af.scope: Deactivated successfully.
Nov 24 13:38:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:52 np0005533938 podman[226125]: 2025-11-24 18:38:52.711910168 +0000 UTC m=+0.035171385 container create 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:38:52 np0005533938 systemd[1]: Started libpod-conmon-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope.
Nov 24 13:38:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:52 np0005533938 podman[226125]: 2025-11-24 18:38:52.782750739 +0000 UTC m=+0.106011966 container init 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:38:52 np0005533938 podman[226125]: 2025-11-24 18:38:52.696162631 +0000 UTC m=+0.019423848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:52 np0005533938 podman[226125]: 2025-11-24 18:38:52.793440222 +0000 UTC m=+0.116701449 container start 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 24 13:38:52 np0005533938 podman[226125]: 2025-11-24 18:38:52.796561268 +0000 UTC m=+0.119822595 container attach 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:38:52 np0005533938 python3.9[226176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009531.9187055-775-100223039296840/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]: {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    "0": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "devices": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "/dev/loop3"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            ],
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_name": "ceph_lv0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_size": "21470642176",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "name": "ceph_lv0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "tags": {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_name": "ceph",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.crush_device_class": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.encrypted": "0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_id": "0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.vdo": "0"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            },
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "vg_name": "ceph_vg0"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        }
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    ],
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    "1": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "devices": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "/dev/loop4"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            ],
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_name": "ceph_lv1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_size": "21470642176",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "name": "ceph_lv1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "tags": {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_name": "ceph",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.crush_device_class": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.encrypted": "0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_id": "1",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.vdo": "0"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            },
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "vg_name": "ceph_vg1"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        }
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    ],
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    "2": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "devices": [
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "/dev/loop5"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            ],
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_name": "ceph_lv2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_size": "21470642176",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "name": "ceph_lv2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "tags": {
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.cluster_name": "ceph",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.crush_device_class": "",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.encrypted": "0",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osd_id": "2",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:                "ceph.vdo": "0"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            },
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "type": "block",
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:            "vg_name": "ceph_vg2"
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:        }
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]:    ]
Nov 24 13:38:53 np0005533938 thirsty_dhawan[226172]: }
Nov 24 13:38:53 np0005533938 systemd[1]: libpod-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope: Deactivated successfully.
Nov 24 13:38:53 np0005533938 podman[226125]: 2025-11-24 18:38:53.545518665 +0000 UTC m=+0.868779892 container died 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:38:53 np0005533938 python3.9[226330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:53 np0005533938 systemd[1]: var-lib-containers-storage-overlay-97f1a8a91f95d160d4d5879a5197f16e344539ac8b4035c1aa583952789f23e9-merged.mount: Deactivated successfully.
Nov 24 13:38:53 np0005533938 podman[226125]: 2025-11-24 18:38:53.61122404 +0000 UTC m=+0.934485257 container remove 6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:38:53 np0005533938 systemd[1]: libpod-conmon-6f4a855f849bb35ea4a5d419ecdf141da02278f2a077b227547eae068138b3c8.scope: Deactivated successfully.
Nov 24 13:38:53 np0005533938 podman[226490]: 2025-11-24 18:38:53.852932561 +0000 UTC m=+0.053506697 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.112187462 +0000 UTC m=+0.033259768 container create 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:38:54 np0005533938 python3.9[226588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009533.1193957-775-46771593594714/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:54 np0005533938 systemd[1]: Started libpod-conmon-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope.
Nov 24 13:38:54 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.165429441 +0000 UTC m=+0.086501767 container init 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.172606667 +0000 UTC m=+0.093678973 container start 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.175531289 +0000 UTC m=+0.096603595 container attach 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:38:54 np0005533938 mystifying_colden[226646]: 167 167
Nov 24 13:38:54 np0005533938 systemd[1]: libpod-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope: Deactivated successfully.
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.177430306 +0000 UTC m=+0.098502612 container died 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.097784968 +0000 UTC m=+0.018857294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fbff84cb96da295f2067e22c3abad70b53a897777e06c905b56c544edb1f7ff7-merged.mount: Deactivated successfully.
Nov 24 13:38:54 np0005533938 podman[226630]: 2025-11-24 18:38:54.211520454 +0000 UTC m=+0.132592760 container remove 915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_colden, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:38:54 np0005533938 systemd[1]: libpod-conmon-915a54bd6435a92343f633515d1b748e4edb04bfee0cc0f4a5ef93c44f18062c.scope: Deactivated successfully.
Nov 24 13:38:54 np0005533938 podman[226720]: 2025-11-24 18:38:54.374320545 +0000 UTC m=+0.044618068 container create fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:38:54 np0005533938 systemd[1]: Started libpod-conmon-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope.
Nov 24 13:38:54 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:38:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:38:54 np0005533938 podman[226720]: 2025-11-24 18:38:54.356408835 +0000 UTC m=+0.026706408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:38:54 np0005533938 podman[226720]: 2025-11-24 18:38:54.452580378 +0000 UTC m=+0.122877921 container init fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:38:54 np0005533938 podman[226720]: 2025-11-24 18:38:54.461005485 +0000 UTC m=+0.131303008 container start fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:38:54 np0005533938 podman[226720]: 2025-11-24 18:38:54.464779828 +0000 UTC m=+0.135077361 container attach fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 13:38:54 np0005533938 python3.9[226843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:55 np0005533938 python3.9[226970]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009534.2862272-775-9405211155211/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]: {
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_id": 0,
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "type": "bluestore"
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    },
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_id": 1,
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "type": "bluestore"
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    },
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_id": 2,
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:        "type": "bluestore"
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]:    }
Nov 24 13:38:55 np0005533938 cranky_lalande[226786]: }
Nov 24 13:38:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:55 np0005533938 systemd[1]: libpod-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope: Deactivated successfully.
Nov 24 13:38:55 np0005533938 conmon[226786]: conmon fc26ddf311eed8216313 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope/container/memory.events
Nov 24 13:38:55 np0005533938 podman[226720]: 2025-11-24 18:38:55.384066642 +0000 UTC m=+1.054364175 container died fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:38:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-06daddb67b93084d4a471ee6fff13745efb3237bbdc485cfab7b698a2557d416-merged.mount: Deactivated successfully.
Nov 24 13:38:55 np0005533938 podman[226720]: 2025-11-24 18:38:55.443190045 +0000 UTC m=+1.113487588 container remove fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lalande, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:38:55 np0005533938 systemd[1]: libpod-conmon-fc26ddf311eed8216313d5c4c816546ebfb748364f2ee0d9fa3d1cdee4a9175e.scope: Deactivated successfully.
Nov 24 13:38:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:38:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:55 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:38:55 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:55 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5537b135-1332-42aa-9cf9-d6825d137d5c does not exist
Nov 24 13:38:55 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 256a0b64-18cf-4958-96f5-b18fe8fa6809 does not exist
Nov 24 13:38:55 np0005533938 python3.9[227208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:56 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:56 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:38:56 np0005533938 python3.9[227331]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009535.4879532-775-142239690940572/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:57 np0005533938 python3.9[227483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:38:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:38:57 np0005533938 python3.9[227606]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009536.6571999-775-98315007989053/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:58 np0005533938 python3.9[227758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:38:58 np0005533938 python3.9[227881]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009537.847678-775-249069123223463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:38:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:00 np0005533938 python3.9[228033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:00 np0005533938 python3.9[228156]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009539.039171-775-84894498910282/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:01 np0005533938 python3.9[228308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:02 np0005533938 python3.9[228431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009541.0799706-775-8435775658737/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:02 np0005533938 python3.9[228583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:03 np0005533938 python3.9[228706]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009542.3209498-775-9963113335835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:04 np0005533938 python3.9[228858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:04 np0005533938 python3.9[228981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009543.4839156-775-177273951310893/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:05 np0005533938 python3.9[229133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:05 np0005533938 python3.9[229256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009544.7428737-775-146826171813683/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:06 np0005533938 python3.9[229408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:06 np0005533938 python3.9[229531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009545.9110618-775-213600914494861/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:07 np0005533938 python3.9[229683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:08 np0005533938 python3.9[229806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009547.1536822-775-92031344019890/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:09 np0005533938 python3.9[229956]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:10 np0005533938 python3.9[230111]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 24 13:39:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:13 np0005533938 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 24 13:39:13 np0005533938 python3.9[230268]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:14 np0005533938 python3.9[230420]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:14 np0005533938 python3.9[230572]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:15 np0005533938 python3.9[230724]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:16 np0005533938 python3.9[230876]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:17 np0005533938 python3.9[231028]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:17 np0005533938 python3.9[231180]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:18 np0005533938 python3.9[231332]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:18 np0005533938 python3.9[231484]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:19 np0005533938 python3.9[231636]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:20 np0005533938 podman[231760]: 2025-11-24 18:39:20.379036612 +0000 UTC m=+0.120064582 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 24 13:39:20 np0005533938 python3.9[231808]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:39:20 np0005533938 systemd[1]: Reloading.
Nov 24 13:39:20 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:39:20 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:39:21 np0005533938 systemd[1]: Starting libvirt logging daemon socket...
Nov 24 13:39:21 np0005533938 systemd[1]: Listening on libvirt logging daemon socket.
Nov 24 13:39:21 np0005533938 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 24 13:39:21 np0005533938 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 24 13:39:21 np0005533938 systemd[1]: Starting libvirt logging daemon...
Nov 24 13:39:21 np0005533938 systemd[1]: Started libvirt logging daemon.
Nov 24 13:39:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:22 np0005533938 python3.9[232008]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:39:22 np0005533938 systemd[1]: Reloading.
Nov 24 13:39:22 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:39:22 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:39:22 np0005533938 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 24 13:39:22 np0005533938 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 24 13:39:22 np0005533938 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 24 13:39:22 np0005533938 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 24 13:39:22 np0005533938 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 24 13:39:22 np0005533938 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 24 13:39:22 np0005533938 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 13:39:22 np0005533938 systemd[1]: Started libvirt nodedev daemon.
Nov 24 13:39:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.731 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:39:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:39:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:39:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:39:23 np0005533938 python3.9[232224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:39:23 np0005533938 systemd[1]: Reloading.
Nov 24 13:39:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:23 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:39:23 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:39:23 np0005533938 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 24 13:39:23 np0005533938 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 24 13:39:23 np0005533938 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 24 13:39:23 np0005533938 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 24 13:39:23 np0005533938 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 24 13:39:23 np0005533938 systemd[1]: Starting libvirt proxy daemon...
Nov 24 13:39:23 np0005533938 systemd[1]: Started libvirt proxy daemon.
Nov 24 13:39:23 np0005533938 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 24 13:39:23 np0005533938 podman[232332]: 2025-11-24 18:39:23.978889467 +0000 UTC m=+0.065621024 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:39:24 np0005533938 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 24 13:39:24 np0005533938 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 24 13:39:24 np0005533938 python3.9[232462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:39:24 np0005533938 systemd[1]: Reloading.
Nov 24 13:39:24 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:39:24 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:39:24 np0005533938 systemd[1]: Listening on libvirt locking daemon socket.
Nov 24 13:39:24 np0005533938 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 24 13:39:24 np0005533938 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 13:39:24 np0005533938 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 24 13:39:24 np0005533938 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 24 13:39:24 np0005533938 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 24 13:39:24 np0005533938 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 24 13:39:24 np0005533938 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 24 13:39:24 np0005533938 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 24 13:39:24 np0005533938 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 24 13:39:24 np0005533938 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 13:39:24 np0005533938 systemd[1]: Started libvirt QEMU daemon.
Nov 24 13:39:25 np0005533938 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 551798ee-1cc8-459a-bbd8-c0d1c25a6a72
Nov 24 13:39:25 np0005533938 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 24 13:39:25 np0005533938 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 551798ee-1cc8-459a-bbd8-c0d1c25a6a72
Nov 24 13:39:25 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:39:25 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:39:25 np0005533938 setroubleshoot[232261]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 24 13:39:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:25 np0005533938 python3.9[232680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:39:25 np0005533938 systemd[1]: Reloading.
Nov 24 13:39:26 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:39:26 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:39:26 np0005533938 systemd[1]: Starting libvirt secret daemon socket...
Nov 24 13:39:26 np0005533938 systemd[1]: Listening on libvirt secret daemon socket.
Nov 24 13:39:26 np0005533938 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 24 13:39:26 np0005533938 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 24 13:39:26 np0005533938 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 24 13:39:26 np0005533938 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 24 13:39:26 np0005533938 systemd[1]: Starting libvirt secret daemon...
Nov 24 13:39:26 np0005533938 systemd[1]: Started libvirt secret daemon.
Nov 24 13:39:27 np0005533938 python3.9[232891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:27 np0005533938 python3.9[233043]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:39:28 np0005533938 python3.9[233195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:29 np0005533938 python3.9[233349]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:39:30 np0005533938 python3.9[233499]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:30 np0005533938 python3.9[233620]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009569.829705-1133-202693258846034/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c2fabb65dd6b649e2c3b161b54086479a3dfe11a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:31 np0005533938 python3.9[233772]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e5ee928f-099b-569b-93c9-ecf025cbb50d#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:32 np0005533938 python3.9[233934]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:39:34
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'vms']
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:39:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:39:35 np0005533938 python3.9[234397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:35 np0005533938 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 24 13:39:35 np0005533938 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 24 13:39:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:35 np0005533938 python3.9[234549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:36 np0005533938 python3.9[234672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009575.2489197-1188-126197433940445/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:37 np0005533938 python3.9[234824]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:37 np0005533938 python3.9[234976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:38 np0005533938 python3.9[235054]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:38 np0005533938 python3.9[235206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:39 np0005533938 python3.9[235284]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wsujy5u1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:40 np0005533938 python3.9[235436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:40 np0005533938 python3.9[235514]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:41 np0005533938 python3.9[235666]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:42 np0005533938 python3[235819]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 24 13:39:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:43 np0005533938 python3.9[235971]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:39:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:43 np0005533938 python3.9[236049]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:44 np0005533938 python3.9[236201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:44 np0005533938 python3.9[236279]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:45 np0005533938 python3.9[236431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:46 np0005533938 python3.9[236509]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:47 np0005533938 python3.9[236661]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:47 np0005533938 python3.9[236739]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:48 np0005533938 python3.9[236891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:48 np0005533938 python3.9[237016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764009587.7336202-1313-59936813877611/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:49 np0005533938 python3.9[237168]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:50 np0005533938 python3.9[237320]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:51 np0005533938 podman[237400]: 2025-11-24 18:39:51.027587555 +0000 UTC m=+0.113153853 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 13:39:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:51 np0005533938 python3.9[237502]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:52 np0005533938 python3.9[237654]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:52 np0005533938 python3.9[237807]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:39:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:53 np0005533938 python3.9[237961]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:39:54 np0005533938 podman[238088]: 2025-11-24 18:39:54.342050132 +0000 UTC m=+0.051516577 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:39:54 np0005533938 python3.9[238136]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:55 np0005533938 python3.9[238288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:56 np0005533938 python3.9[238461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009594.8343933-1385-15323831221177/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:39:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:39:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:56 np0005533938 python3.9[238688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:39:57 np0005533938 python3.9[238925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009596.216751-1400-108416072883157/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:39:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 0011bb82-7580-4ec0-ae7d-afb17da63679 does not exist
Nov 24 13:39:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 4d92a59b-123b-41b8-bf93-83aa14acf33e does not exist
Nov 24 13:39:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 79bb678b-ee33-4828-935d-8e3f7af82ff5 does not exist
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:39:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:39:58 np0005533938 python3.9[239089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:39:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:39:59 np0005533938 podman[239332]: 2025-11-24 18:39:59.298604639 +0000 UTC m=+0.023195455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:39:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:39:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:39:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:39:59 np0005533938 python3.9[239367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009597.576854-1415-126753289083347/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:39:59 np0005533938 podman[239332]: 2025-11-24 18:39:59.702142013 +0000 UTC m=+0.426732819 container create c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:39:59 np0005533938 systemd[1]: Started libpod-conmon-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope.
Nov 24 13:39:59 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:39:59 np0005533938 podman[239332]: 2025-11-24 18:39:59.985959303 +0000 UTC m=+0.710550079 container init c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:39:59 np0005533938 podman[239332]: 2025-11-24 18:39:59.996093954 +0000 UTC m=+0.720684750 container start c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:40:00 np0005533938 keen_moore[239446]: 167 167
Nov 24 13:40:00 np0005533938 systemd[1]: libpod-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope: Deactivated successfully.
Nov 24 13:40:00 np0005533938 podman[239332]: 2025-11-24 18:40:00.053891205 +0000 UTC m=+0.778482011 container attach c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:40:00 np0005533938 podman[239332]: 2025-11-24 18:40:00.054701195 +0000 UTC m=+0.779292001 container died c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:40:00 np0005533938 systemd[1]: var-lib-containers-storage-overlay-776900ec71b72844c6bcb2fa95ca11e3833bee21a562c9b69d2b544c56ae8490-merged.mount: Deactivated successfully.
Nov 24 13:40:00 np0005533938 python3.9[239535]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:40:00 np0005533938 systemd[1]: Reloading.
Nov 24 13:40:00 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:40:00 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:40:01 np0005533938 systemd[1]: Reached target edpm_libvirt.target.
Nov 24 13:40:01 np0005533938 podman[239332]: 2025-11-24 18:40:01.133243957 +0000 UTC m=+1.857834723 container remove c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:01 np0005533938 systemd[1]: libpod-conmon-c30801816852cb4a42db06425d28fa2c8b06d9eb23a8435fc3e7c7abf2a3a736.scope: Deactivated successfully.
Nov 24 13:40:01 np0005533938 podman[239630]: 2025-11-24 18:40:01.268364953 +0000 UTC m=+0.025477841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:40:01 np0005533938 podman[239630]: 2025-11-24 18:40:01.373492156 +0000 UTC m=+0.130604994 container create c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 24 13:40:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:01 np0005533938 systemd[1]: Started libpod-conmon-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope.
Nov 24 13:40:01 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:40:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:01 np0005533938 podman[239630]: 2025-11-24 18:40:01.627822865 +0000 UTC m=+0.384935723 container init c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:40:01 np0005533938 podman[239630]: 2025-11-24 18:40:01.638200472 +0000 UTC m=+0.395313320 container start c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:40:01 np0005533938 podman[239630]: 2025-11-24 18:40:01.746082634 +0000 UTC m=+0.503195482 container attach c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 24 13:40:01 np0005533938 python3.9[239750]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 24 13:40:01 np0005533938 systemd[1]: Reloading.
Nov 24 13:40:01 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:40:01 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:40:02 np0005533938 systemd[1]: Reloading.
Nov 24 13:40:02 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:40:02 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:40:02 np0005533938 recursing_feynman[239751]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:40:02 np0005533938 recursing_feynman[239751]: --> relative data size: 1.0
Nov 24 13:40:02 np0005533938 recursing_feynman[239751]: --> All data devices are unavailable
Nov 24 13:40:02 np0005533938 systemd[1]: libpod-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope: Deactivated successfully.
Nov 24 13:40:02 np0005533938 podman[239630]: 2025-11-24 18:40:02.611120688 +0000 UTC m=+1.368233516 container died c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 24 13:40:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-afff9de9e6ebc806885f760a22a5f358e5d1ef235662e30ced429a6afce0330e-merged.mount: Deactivated successfully.
Nov 24 13:40:02 np0005533938 systemd-logind[822]: Session 51 logged out. Waiting for processes to exit.
Nov 24 13:40:02 np0005533938 systemd[1]: session-51.scope: Deactivated successfully.
Nov 24 13:40:02 np0005533938 systemd[1]: session-51.scope: Consumed 3min 22.511s CPU time.
Nov 24 13:40:02 np0005533938 systemd-logind[822]: Removed session 51.
Nov 24 13:40:03 np0005533938 podman[239630]: 2025-11-24 18:40:03.012246123 +0000 UTC m=+1.769358951 container remove c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:03 np0005533938 systemd[1]: libpod-conmon-c5d208456c463172646b85e188b37d5061841d531268e83e4a3f72ba1ae84a19.scope: Deactivated successfully.
Nov 24 13:40:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:03 np0005533938 podman[240030]: 2025-11-24 18:40:03.698578641 +0000 UTC m=+0.020932410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:40:03 np0005533938 podman[240030]: 2025-11-24 18:40:03.87704117 +0000 UTC m=+0.199394909 container create 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:40:04 np0005533938 systemd[1]: Started libpod-conmon-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope.
Nov 24 13:40:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:40:04 np0005533938 podman[240030]: 2025-11-24 18:40:04.257464552 +0000 UTC m=+0.579818311 container init 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:40:04 np0005533938 podman[240030]: 2025-11-24 18:40:04.27149999 +0000 UTC m=+0.593853769 container start 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:40:04 np0005533938 dazzling_heyrovsky[240046]: 167 167
Nov 24 13:40:04 np0005533938 systemd[1]: libpod-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope: Deactivated successfully.
Nov 24 13:40:04 np0005533938 podman[240030]: 2025-11-24 18:40:04.394511266 +0000 UTC m=+0.716865035 container attach 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:40:04 np0005533938 podman[240030]: 2025-11-24 18:40:04.395455769 +0000 UTC m=+0.717809518 container died 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b189fd9bb3ee01b4e05bdf49ccfd62158243bc013ef9abc85b6cc4aa9b21f306-merged.mount: Deactivated successfully.
Nov 24 13:40:04 np0005533938 podman[240030]: 2025-11-24 18:40:04.541013664 +0000 UTC m=+0.863367413 container remove 13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:40:04 np0005533938 systemd[1]: libpod-conmon-13bd945d47e4d22a600cb5918d8ea600269311a0ed5d84409015d1ac460cef46.scope: Deactivated successfully.
Nov 24 13:40:04 np0005533938 podman[240070]: 2025-11-24 18:40:04.694290671 +0000 UTC m=+0.042996106 container create 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:04 np0005533938 systemd[1]: Started libpod-conmon-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope.
Nov 24 13:40:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:40:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:04 np0005533938 podman[240070]: 2025-11-24 18:40:04.673859065 +0000 UTC m=+0.022564530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:40:04 np0005533938 podman[240070]: 2025-11-24 18:40:04.778422334 +0000 UTC m=+0.127127789 container init 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:40:04 np0005533938 podman[240070]: 2025-11-24 18:40:04.78550952 +0000 UTC m=+0.134214955 container start 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:40:04 np0005533938 podman[240070]: 2025-11-24 18:40:04.790046682 +0000 UTC m=+0.138752147 container attach 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]: {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    "0": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "devices": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "/dev/loop3"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            ],
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_name": "ceph_lv0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_size": "21470642176",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "name": "ceph_lv0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "tags": {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_name": "ceph",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.crush_device_class": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.encrypted": "0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_id": "0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.vdo": "0"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            },
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "vg_name": "ceph_vg0"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        }
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    ],
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    "1": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "devices": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "/dev/loop4"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            ],
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_name": "ceph_lv1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_size": "21470642176",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "name": "ceph_lv1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "tags": {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_name": "ceph",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.crush_device_class": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.encrypted": "0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_id": "1",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.vdo": "0"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            },
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "vg_name": "ceph_vg1"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        }
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    ],
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    "2": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "devices": [
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "/dev/loop5"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            ],
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_name": "ceph_lv2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_size": "21470642176",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "name": "ceph_lv2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "tags": {
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.cluster_name": "ceph",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.crush_device_class": "",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.encrypted": "0",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osd_id": "2",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:                "ceph.vdo": "0"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            },
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "type": "block",
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:            "vg_name": "ceph_vg2"
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:        }
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]:    ]
Nov 24 13:40:05 np0005533938 mystifying_feistel[240086]: }
Nov 24 13:40:05 np0005533938 systemd[1]: libpod-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope: Deactivated successfully.
Nov 24 13:40:05 np0005533938 conmon[240086]: conmon 14398979fa2f69be7c5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope/container/memory.events
Nov 24 13:40:05 np0005533938 podman[240070]: 2025-11-24 18:40:05.567197209 +0000 UTC m=+0.915902644 container died 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:40:05 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ca0209438eb7bda5bdc2b24957d2ca9d53381aeaa01063e22f5570b33db6575d-merged.mount: Deactivated successfully.
Nov 24 13:40:05 np0005533938 podman[240070]: 2025-11-24 18:40:05.628845555 +0000 UTC m=+0.977550990 container remove 14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:40:05 np0005533938 systemd[1]: libpod-conmon-14398979fa2f69be7c5c35715124d9286b1564bdaaf6ed2f0f8b0ca827940bd1.scope: Deactivated successfully.
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.261985176 +0000 UTC m=+0.055240149 container create 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:40:06 np0005533938 systemd[1]: Started libpod-conmon-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope.
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.234772292 +0000 UTC m=+0.028027355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:40:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.35742929 +0000 UTC m=+0.150684283 container init 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.364527866 +0000 UTC m=+0.157782839 container start 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.367680404 +0000 UTC m=+0.160935407 container attach 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:40:06 np0005533938 sleepy_hopper[240263]: 167 167
Nov 24 13:40:06 np0005533938 systemd[1]: libpod-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope: Deactivated successfully.
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.371506279 +0000 UTC m=+0.164761272 container died 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:40:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5fc3b6702da7f8cdb558a07fab46ec086e60cc4625d538f6fea3e850824d4216-merged.mount: Deactivated successfully.
Nov 24 13:40:06 np0005533938 podman[240246]: 2025-11-24 18:40:06.409258284 +0000 UTC m=+0.202513257 container remove 465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:40:06 np0005533938 systemd[1]: libpod-conmon-465d93d32715f3d949ebb89dbd23170baf5936611f6c48c89af14b4ecfadd256.scope: Deactivated successfully.
Nov 24 13:40:06 np0005533938 podman[240288]: 2025-11-24 18:40:06.578536596 +0000 UTC m=+0.043964140 container create 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:40:06 np0005533938 systemd[1]: Started libpod-conmon-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope.
Nov 24 13:40:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:40:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:40:06 np0005533938 podman[240288]: 2025-11-24 18:40:06.651686478 +0000 UTC m=+0.117114042 container init 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:40:06 np0005533938 podman[240288]: 2025-11-24 18:40:06.561256568 +0000 UTC m=+0.026684132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:40:06 np0005533938 podman[240288]: 2025-11-24 18:40:06.663394358 +0000 UTC m=+0.128821902 container start 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:40:06 np0005533938 podman[240288]: 2025-11-24 18:40:06.666661239 +0000 UTC m=+0.132088833 container attach 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:40:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]: {
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_id": 0,
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "type": "bluestore"
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    },
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_id": 1,
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "type": "bluestore"
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    },
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_id": 2,
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:        "type": "bluestore"
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]:    }
Nov 24 13:40:07 np0005533938 angry_rhodes[240305]: }
Nov 24 13:40:07 np0005533938 systemd[1]: libpod-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Deactivated successfully.
Nov 24 13:40:07 np0005533938 systemd[1]: libpod-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Consumed 1.022s CPU time.
Nov 24 13:40:07 np0005533938 podman[240288]: 2025-11-24 18:40:07.682025086 +0000 UTC m=+1.147452670 container died 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:40:07 np0005533938 systemd[1]: var-lib-containers-storage-overlay-64944f7e89ddc3ac30e497c04f0206d45ab8314b07757b7c2f3ae91821f413ac-merged.mount: Deactivated successfully.
Nov 24 13:40:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:07 np0005533938 podman[240288]: 2025-11-24 18:40:07.745337444 +0000 UTC m=+1.210764998 container remove 244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:40:07 np0005533938 systemd[1]: libpod-conmon-244380a20176ca94e3da1d51b504a2f30c89326d76f865e40563345cb8178fda.scope: Deactivated successfully.
Nov 24 13:40:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:40:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:40:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:40:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:40:07 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f2a13081-88e1-4ce6-823f-baba801d5119 does not exist
Nov 24 13:40:07 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev c6fe55c8-9ddd-4685-93ac-f0de07ed5357 does not exist
Nov 24 13:40:08 np0005533938 systemd-logind[822]: New session 52 of user zuul.
Nov 24 13:40:08 np0005533938 systemd[1]: Started Session 52 of User zuul.
Nov 24 13:40:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:40:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:40:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:09 np0005533938 python3.9[240555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:40:10 np0005533938 python3.9[240709]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:40:10 np0005533938 network[240726]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:40:10 np0005533938 network[240727]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:40:10 np0005533938 network[240728]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:40:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:15 np0005533938 python3.9[241000]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 24 13:40:16 np0005533938 python3.9[241084]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:40:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:22 np0005533938 podman[241086]: 2025-11-24 18:40:22.054834169 +0000 UTC m=+0.140203434 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 24 13:40:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:40:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:40:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:40:22.732 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:40:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:23 np0005533938 python3.9[241263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:40:24 np0005533938 podman[241415]: 2025-11-24 18:40:24.516737791 +0000 UTC m=+0.110803176 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 24 13:40:24 np0005533938 python3.9[241416]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:40:25 np0005533938 python3.9[241587]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:40:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:26 np0005533938 python3.9[241739]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:40:26 np0005533938 python3.9[241892]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:27 np0005533938 python3.9[242015]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009626.2239416-95-57991512509741/.source.iscsi _original_basename=.8itx7q88 follow=False checksum=dd0d0d208ed07e6a1ac6c580acc057f9dc4e2fc0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:28 np0005533938 python3.9[242167]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:29 np0005533938 python3.9[242319]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:31 np0005533938 python3.9[242471]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:40:31 np0005533938 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 24 13:40:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:31 np0005533938 python3.9[242627]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:40:32 np0005533938 systemd[1]: Reloading.
Nov 24 13:40:32 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:40:32 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:40:32 np0005533938 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 13:40:32 np0005533938 systemd[1]: Starting Open-iSCSI...
Nov 24 13:40:32 np0005533938 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 13:40:32 np0005533938 systemd[1]: Started Open-iSCSI.
Nov 24 13:40:32 np0005533938 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 24 13:40:32 np0005533938 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 24 13:40:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:33 np0005533938 python3.9[242829]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:40:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:33 np0005533938 network[242846]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:40:33 np0005533938 network[242847]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:40:33 np0005533938 network[242848]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:40:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:40:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5582 writes, 23K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5582 writes, 857 syncs, 6.51 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ab251ff1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:40:34
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes']
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:40:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:40:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:38 np0005533938 python3.9[243120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 13:40:38 np0005533938 python3.9[243272]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 24 13:40:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:39 np0005533938 python3.9[243428]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:40 np0005533938 python3.9[243551]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009639.1811097-172-248901092692047/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:40:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1201.0 total, 600.0 interval
Cumulative writes: 6685 writes, 27K keys, 6685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6685 writes, 1209 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:40:40 np0005533938 python3.9[243703]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:41 np0005533938 python3.9[243855]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:40:42 np0005533938 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 13:40:42 np0005533938 systemd[1]: Stopped Load Kernel Modules.
Nov 24 13:40:42 np0005533938 systemd[1]: Stopping Load Kernel Modules...
Nov 24 13:40:42 np0005533938 systemd[1]: Starting Load Kernel Modules...
Nov 24 13:40:42 np0005533938 systemd[1]: Finished Load Kernel Modules.
Nov 24 13:40:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:42 np0005533938 python3.9[244012]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:40:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:43 np0005533938 python3.9[244164]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:40:44 np0005533938 python3.9[244316]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:40:45 np0005533938 python3.9[244468]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:45 np0005533938 python3.9[244591]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009644.7401109-230-95481122729426/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:46 np0005533938 python3.9[244743]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:40:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:47 np0005533938 python3.9[244896]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:40:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 24 13:40:48 np0005533938 python3.9[245048]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:49 np0005533938 python3.9[245200]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:49 np0005533938 python3.9[245352]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:50 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 13:40:50 np0005533938 python3.9[245504]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:51 np0005533938 python3.9[245656]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:52 np0005533938 python3.9[245808]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:52 np0005533938 podman[245932]: 2025-11-24 18:40:52.900721118 +0000 UTC m=+0.140406838 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:40:53 np0005533938 python3.9[245970]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:40:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:53 np0005533938 python3.9[246137]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:54 np0005533938 python3.9[246289]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:40:54 np0005533938 podman[246366]: 2025-11-24 18:40:54.959708261 +0000 UTC m=+0.048690955 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:40:55 np0005533938 python3.9[246457]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:55 np0005533938 python3.9[246535]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:40:56 np0005533938 python3.9[246687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:56 np0005533938 python3.9[246765]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:40:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:57 np0005533938 python3.9[246917]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:40:58 np0005533938 python3.9[247069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:40:58 np0005533938 python3.9[247147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:40:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:40:59 np0005533938 python3.9[247299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:00 np0005533938 python3.9[247377]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:01 np0005533938 python3.9[247529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:01 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:01 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:01 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:02 np0005533938 python3.9[247718]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:03 np0005533938 python3.9[247796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:03 np0005533938 python3.9[247948]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:04 np0005533938 python3.9[248026]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:05 np0005533938 python3.9[248178]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:05 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:05 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:05 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:05 np0005533938 systemd[1]: Starting Create netns directory...
Nov 24 13:41:05 np0005533938 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 24 13:41:05 np0005533938 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 24 13:41:05 np0005533938 systemd[1]: Finished Create netns directory.
Nov 24 13:41:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:06 np0005533938 python3.9[248371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:41:06 np0005533938 python3.9[248523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:07 np0005533938 python3.9[248646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009666.3559847-437-234106034483248/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:41:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:08 np0005533938 python3.9[248871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.425206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668425284, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1573, "num_deletes": 251, "total_data_size": 2606937, "memory_usage": 2647472, "flush_reason": "Manual Compaction"}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668442736, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2561561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14760, "largest_seqno": 16332, "table_properties": {"data_size": 2554181, "index_size": 4387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14574, "raw_average_key_size": 19, "raw_value_size": 2539603, "raw_average_value_size": 3418, "num_data_blocks": 201, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009493, "oldest_key_time": 1764009493, "file_creation_time": 1764009668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17565 microseconds, and 10523 cpu microseconds.
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.442787) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2561561 bytes OK
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.442808) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444118) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444135) EVENT_LOG_v1 {"time_micros": 1764009668444129, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.444153) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2600139, prev total WAL file size 2600139, number of live WAL files 2.
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.445134) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2501KB)], [35(6733KB)]
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668445178, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9456532, "oldest_snapshot_seqno": -1}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4000 keys, 7693577 bytes, temperature: kUnknown
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668485548, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7693577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7664683, "index_size": 17776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97757, "raw_average_key_size": 24, "raw_value_size": 7590113, "raw_average_value_size": 1897, "num_data_blocks": 753, "num_entries": 4000, "num_filter_entries": 4000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009668, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.485761) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7693577 bytes
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.487588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.9 rd, 190.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4514, records dropped: 514 output_compression: NoCompression
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.487603) EVENT_LOG_v1 {"time_micros": 1764009668487595, "job": 16, "event": "compaction_finished", "compaction_time_micros": 40436, "compaction_time_cpu_micros": 15433, "output_level": 6, "num_output_files": 1, "total_output_size": 7693577, "num_input_records": 4514, "num_output_records": 4000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668488093, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009668489090, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.445049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:41:08.489175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev b42278e5-2095-4e17-812a-4b5d103f408d does not exist
Nov 24 13:41:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 04508045-5407-46f3-b7b1-9e2ae4e1570e does not exist
Nov 24 13:41:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6054c3f0-b48c-46ce-928e-40153215bb2c does not exist
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:41:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:41:09 np0005533938 python3.9[249134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.353116823 +0000 UTC m=+0.045671112 container create c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:41:09 np0005533938 systemd[1]: Started libpod-conmon-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope.
Nov 24 13:41:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.334379429 +0000 UTC m=+0.026933768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:41:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:41:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:09 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.44020663 +0000 UTC m=+0.132760969 container init c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.446578518 +0000 UTC m=+0.139132817 container start c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.449699555 +0000 UTC m=+0.142253854 container attach c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:41:09 np0005533938 eloquent_haslett[249328]: 167 167
Nov 24 13:41:09 np0005533938 systemd[1]: libpod-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope: Deactivated successfully.
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.453863708 +0000 UTC m=+0.146418057 container died c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:41:09 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5d0ff6b40b8ebc2e3a9debbd5fa4672e1c7b7a14d818ebc82c4ff9696fa90faf-merged.mount: Deactivated successfully.
Nov 24 13:41:09 np0005533938 podman[249281]: 2025-11-24 18:41:09.496595066 +0000 UTC m=+0.189149355 container remove c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:41:09 np0005533938 systemd[1]: libpod-conmon-c5e1551bafa4fb441e42fa0498b39e68d481ad0deae6cd8035178447f00bde67.scope: Deactivated successfully.
Nov 24 13:41:09 np0005533938 podman[249383]: 2025-11-24 18:41:09.661427739 +0000 UTC m=+0.043645332 container create d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:41:09 np0005533938 systemd[1]: Started libpod-conmon-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope.
Nov 24 13:41:09 np0005533938 python3.9[249377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009668.5893915-462-241614243729571/.source.json _original_basename=.cnpgpgjt follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:09 np0005533938 podman[249383]: 2025-11-24 18:41:09.645080704 +0000 UTC m=+0.027298307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:09 np0005533938 podman[249383]: 2025-11-24 18:41:09.754528504 +0000 UTC m=+0.136746117 container init d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:41:09 np0005533938 podman[249383]: 2025-11-24 18:41:09.764063791 +0000 UTC m=+0.146281384 container start d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:41:09 np0005533938 podman[249383]: 2025-11-24 18:41:09.76807326 +0000 UTC m=+0.150290853 container attach d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:41:10 np0005533938 python3.9[249554]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:10 np0005533938 cool_bouman[249398]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:41:10 np0005533938 cool_bouman[249398]: --> relative data size: 1.0
Nov 24 13:41:10 np0005533938 cool_bouman[249398]: --> All data devices are unavailable
Nov 24 13:41:10 np0005533938 systemd[1]: libpod-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope: Deactivated successfully.
Nov 24 13:41:10 np0005533938 podman[249383]: 2025-11-24 18:41:10.745794795 +0000 UTC m=+1.128012408 container died d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 24 13:41:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a87554e8238f355feb97101aa59f69cd5731889f9876a905b4da660d1f22b47d-merged.mount: Deactivated successfully.
Nov 24 13:41:10 np0005533938 podman[249383]: 2025-11-24 18:41:10.796554412 +0000 UTC m=+1.178772005 container remove d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:41:10 np0005533938 systemd[1]: libpod-conmon-d9f5faadfbc74fabee6b0d1e1e5451760509be7ff2de3279d1d2a2cba1ee362c.scope: Deactivated successfully.
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.312171162 +0000 UTC m=+0.038600567 container create 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 24 13:41:11 np0005533938 systemd[1]: Started libpod-conmon-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope.
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.295669363 +0000 UTC m=+0.022098838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.4146561 +0000 UTC m=+0.141085515 container init 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:41:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.420999787 +0000 UTC m=+0.147429182 container start 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.423969291 +0000 UTC m=+0.150398706 container attach 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:41:11 np0005533938 adoring_lovelace[250020]: 167 167
Nov 24 13:41:11 np0005533938 systemd[1]: libpod-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope: Deactivated successfully.
Nov 24 13:41:11 np0005533938 conmon[250020]: conmon 7cb0722ca40b1f110773 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope/container/memory.events
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.425983951 +0000 UTC m=+0.152413366 container died 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 24 13:41:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7bcb754fd75c70343c4a264b3308ce7a62e18f0d9294450041fb39a370e1dd42-merged.mount: Deactivated successfully.
Nov 24 13:41:11 np0005533938 podman[249952]: 2025-11-24 18:41:11.458317022 +0000 UTC m=+0.184746417 container remove 7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:41:11 np0005533938 systemd[1]: libpod-conmon-7cb0722ca40b1f110773e2d4ef594ae7de488db75e7ee51c0d481dfdf45434cd.scope: Deactivated successfully.
Nov 24 13:41:11 np0005533938 podman[250043]: 2025-11-24 18:41:11.609848145 +0000 UTC m=+0.040456743 container create 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:41:11 np0005533938 systemd[1]: Started libpod-conmon-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope.
Nov 24 13:41:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:11 np0005533938 podman[250043]: 2025-11-24 18:41:11.668332233 +0000 UTC m=+0.098940841 container init 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 24 13:41:11 np0005533938 podman[250043]: 2025-11-24 18:41:11.675031899 +0000 UTC m=+0.105640487 container start 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:41:11 np0005533938 podman[250043]: 2025-11-24 18:41:11.678412563 +0000 UTC m=+0.109021171 container attach 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:41:11 np0005533938 podman[250043]: 2025-11-24 18:41:11.591801268 +0000 UTC m=+0.022409876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]: {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    "0": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "devices": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "/dev/loop3"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            ],
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_name": "ceph_lv0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_size": "21470642176",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "name": "ceph_lv0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "tags": {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_name": "ceph",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.crush_device_class": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.encrypted": "0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_id": "0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.vdo": "0"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            },
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "vg_name": "ceph_vg0"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        }
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    ],
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    "1": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "devices": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "/dev/loop4"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            ],
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_name": "ceph_lv1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_size": "21470642176",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "name": "ceph_lv1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "tags": {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_name": "ceph",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.crush_device_class": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.encrypted": "0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_id": "1",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.vdo": "0"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            },
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "vg_name": "ceph_vg1"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        }
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    ],
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    "2": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "devices": [
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "/dev/loop5"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            ],
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_name": "ceph_lv2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_size": "21470642176",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "name": "ceph_lv2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "tags": {
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.cluster_name": "ceph",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.crush_device_class": "",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.encrypted": "0",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osd_id": "2",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:                "ceph.vdo": "0"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            },
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "type": "block",
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:            "vg_name": "ceph_vg2"
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:        }
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]:    ]
Nov 24 13:41:12 np0005533938 vigilant_montalcini[250083]: }
Nov 24 13:41:12 np0005533938 python3.9[250215]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 24 13:41:12 np0005533938 systemd[1]: libpod-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope: Deactivated successfully.
Nov 24 13:41:12 np0005533938 podman[250043]: 2025-11-24 18:41:12.406173117 +0000 UTC m=+0.836781705 container died 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:41:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bc212d3a948f8e5a9d041a394b8eb085c523c82902292b029f28c5cc0c4d96ba-merged.mount: Deactivated successfully.
Nov 24 13:41:12 np0005533938 podman[250043]: 2025-11-24 18:41:12.4575838 +0000 UTC m=+0.888192378 container remove 681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_montalcini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:41:12 np0005533938 systemd[1]: libpod-conmon-681088b41182cd2cfdb95f7050f17836f7a3e29b85bfc40a1e32ae024037bb85.scope: Deactivated successfully.
Nov 24 13:41:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:12 np0005533938 podman[250447]: 2025-11-24 18:41:12.994947478 +0000 UTC m=+0.040344670 container create b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:41:13 np0005533938 systemd[1]: Started libpod-conmon-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope.
Nov 24 13:41:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:13.068698055 +0000 UTC m=+0.114095267 container init b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:13.074762415 +0000 UTC m=+0.120159607 container start b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:12.98010281 +0000 UTC m=+0.025500022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:13 np0005533938 gracious_mahavira[250480]: 167 167
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:13.078120298 +0000 UTC m=+0.123517490 container attach b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:41:13 np0005533938 systemd[1]: libpod-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope: Deactivated successfully.
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:13.078661711 +0000 UTC m=+0.124058903 container died b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:41:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b0b026f93abec062ffd88fd744882dc80f7a035f6ebedb975394dae88146d016-merged.mount: Deactivated successfully.
Nov 24 13:41:13 np0005533938 podman[250447]: 2025-11-24 18:41:13.112316465 +0000 UTC m=+0.157713657 container remove b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:41:13 np0005533938 systemd[1]: libpod-conmon-b3361314280a0151ba558271a5a15287b6ad829663d49a41867f2bf7340b425b.scope: Deactivated successfully.
Nov 24 13:41:13 np0005533938 podman[250562]: 2025-11-24 18:41:13.267223981 +0000 UTC m=+0.043808736 container create d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:41:13 np0005533938 systemd[1]: Started libpod-conmon-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope.
Nov 24 13:41:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:13 np0005533938 podman[250562]: 2025-11-24 18:41:13.250112368 +0000 UTC m=+0.026697123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:41:13 np0005533938 podman[250562]: 2025-11-24 18:41:13.344884425 +0000 UTC m=+0.121469160 container init d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:41:13 np0005533938 podman[250562]: 2025-11-24 18:41:13.350624077 +0000 UTC m=+0.127208812 container start d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:41:13 np0005533938 podman[250562]: 2025-11-24 18:41:13.353675873 +0000 UTC m=+0.130260608 container attach d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:41:13 np0005533938 python3.9[250556]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 13:41:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:14 np0005533938 zen_raman[250579]: {
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_id": 0,
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "type": "bluestore"
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    },
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_id": 1,
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "type": "bluestore"
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    },
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_id": 2,
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:41:14 np0005533938 zen_raman[250579]:        "type": "bluestore"
Nov 24 13:41:14 np0005533938 zen_raman[250579]:    }
Nov 24 13:41:14 np0005533938 zen_raman[250579]: }
Nov 24 13:41:14 np0005533938 systemd[1]: libpod-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope: Deactivated successfully.
Nov 24 13:41:14 np0005533938 podman[250562]: 2025-11-24 18:41:14.266696105 +0000 UTC m=+1.043280840 container died d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:41:14 np0005533938 systemd[1]: var-lib-containers-storage-overlay-7ea473da794bbcc08bb7188494098b1af7e3e10be298484e9341b074cb6aa61b-merged.mount: Deactivated successfully.
Nov 24 13:41:14 np0005533938 python3.9[250746]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 24 13:41:14 np0005533938 podman[250562]: 2025-11-24 18:41:14.314250463 +0000 UTC m=+1.090835198 container remove d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:41:14 np0005533938 systemd[1]: libpod-conmon-d3a6e5bd6e49e5b16986ced264a802fff538a952f9099c7fccf0c32a5f452227.scope: Deactivated successfully.
Nov 24 13:41:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:41:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:41:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:14 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 15b106ed-7f40-4254-9528-f4d7022764c1 does not exist
Nov 24 13:41:14 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 72eaa573-16f4-4ab7-9018-4fe8e4ddd83e does not exist
Nov 24 13:41:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:41:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:15 np0005533938 python3[251004]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 13:41:17 np0005533938 podman[251016]: 2025-11-24 18:41:17.079030526 +0000 UTC m=+1.153511749 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 13:41:17 np0005533938 podman[251075]: 2025-11-24 18:41:17.199575111 +0000 UTC m=+0.038932305 container create e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:41:17 np0005533938 podman[251075]: 2025-11-24 18:41:17.17932022 +0000 UTC m=+0.018677414 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 13:41:17 np0005533938 python3[251004]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 24 13:41:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:17 np0005533938 python3.9[251263]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:41:18 np0005533938 python3.9[251417]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:19 np0005533938 python3.9[251493]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:41:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:20 np0005533938 python3.9[251644]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009679.2710717-550-48659998448379/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:20 np0005533938 python3.9[251720]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:41:20 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:20 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:20 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:21 np0005533938 python3.9[251831]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:21 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:21 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:21 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:21 np0005533938 systemd[1]: Starting multipathd container...
Nov 24 13:41:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:21 np0005533938 systemd[1]: Started /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 13:41:21 np0005533938 podman[251871]: 2025-11-24 18:41:21.979622576 +0000 UTC m=+0.112662141 container init e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:41:21 np0005533938 multipathd[251886]: + sudo -E kolla_set_configs
Nov 24 13:41:22 np0005533938 podman[251871]: 2025-11-24 18:41:22.006886041 +0000 UTC m=+0.139925596 container start e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:41:22 np0005533938 podman[251871]: multipathd
Nov 24 13:41:22 np0005533938 systemd[1]: Started multipathd container.
Nov 24 13:41:22 np0005533938 multipathd[251886]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:41:22 np0005533938 multipathd[251886]: INFO:__main__:Validating config file
Nov 24 13:41:22 np0005533938 multipathd[251886]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:41:22 np0005533938 multipathd[251886]: INFO:__main__:Writing out command to execute
Nov 24 13:41:22 np0005533938 multipathd[251886]: ++ cat /run_command
Nov 24 13:41:22 np0005533938 multipathd[251886]: + CMD='/usr/sbin/multipathd -d'
Nov 24 13:41:22 np0005533938 multipathd[251886]: + ARGS=
Nov 24 13:41:22 np0005533938 multipathd[251886]: + sudo kolla_copy_cacerts
Nov 24 13:41:22 np0005533938 podman[251893]: 2025-11-24 18:41:22.078338531 +0000 UTC m=+0.052858170 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:41:22 np0005533938 multipathd[251886]: + [[ ! -n '' ]]
Nov 24 13:41:22 np0005533938 multipathd[251886]: + . kolla_extend_start
Nov 24 13:41:22 np0005533938 multipathd[251886]: Running command: '/usr/sbin/multipathd -d'
Nov 24 13:41:22 np0005533938 multipathd[251886]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 13:41:22 np0005533938 multipathd[251886]: + umask 0022
Nov 24 13:41:22 np0005533938 multipathd[251886]: + exec /usr/sbin/multipathd -d
Nov 24 13:41:22 np0005533938 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 13:41:22 np0005533938 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.service: Failed with result 'exit-code'.
Nov 24 13:41:22 np0005533938 multipathd[251886]: 3413.787407 | --------start up--------
Nov 24 13:41:22 np0005533938 multipathd[251886]: 3413.787427 | read /etc/multipath.conf
Nov 24 13:41:22 np0005533938 multipathd[251886]: 3413.792799 | path checkers start up
Nov 24 13:41:22 np0005533938 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 24 13:41:22 np0005533938 python3.9[252077]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:41:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:41:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:41:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:41:22.733 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:41:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:23 np0005533938 podman[252204]: 2025-11-24 18:41:23.249076526 +0000 UTC m=+0.101599247 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 24 13:41:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:23 np0005533938 python3.9[252256]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:41:23 np0005533938 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 24 13:41:24 np0005533938 python3.9[252426]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:41:24 np0005533938 systemd[1]: Stopping multipathd container...
Nov 24 13:41:24 np0005533938 multipathd[251886]: 3416.110390 | exit (signal)
Nov 24 13:41:24 np0005533938 multipathd[251886]: 3416.110892 | --------shut down-------
Nov 24 13:41:24 np0005533938 systemd[1]: libpod-e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.scope: Deactivated successfully.
Nov 24 13:41:24 np0005533938 podman[252430]: 2025-11-24 18:41:24.459037202 +0000 UTC m=+0.085261483 container died e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 13:41:24 np0005533938 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-377f3d61fd3065a5.timer: Deactivated successfully.
Nov 24 13:41:24 np0005533938 systemd[1]: Stopped /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 13:41:24 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2-merged.mount: Deactivated successfully.
Nov 24 13:41:24 np0005533938 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-userdata-shm.mount: Deactivated successfully.
Nov 24 13:41:24 np0005533938 podman[252430]: 2025-11-24 18:41:24.657083547 +0000 UTC m=+0.283307838 container cleanup e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 24 13:41:24 np0005533938 podman[252430]: multipathd
Nov 24 13:41:24 np0005533938 podman[252459]: multipathd
Nov 24 13:41:24 np0005533938 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 24 13:41:24 np0005533938 systemd[1]: Stopped multipathd container.
Nov 24 13:41:24 np0005533938 systemd[1]: Starting multipathd container...
Nov 24 13:41:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:41:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd885d65a09ab530fdcefa9259171e234eb21e348f6a93688aff9a4d5f7c1db2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 13:41:24 np0005533938 systemd[1]: Started /usr/bin/podman healthcheck run e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514.
Nov 24 13:41:24 np0005533938 podman[252472]: 2025-11-24 18:41:24.862422072 +0000 UTC m=+0.111625755 container init e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:41:24 np0005533938 multipathd[252487]: + sudo -E kolla_set_configs
Nov 24 13:41:24 np0005533938 podman[252472]: 2025-11-24 18:41:24.891215695 +0000 UTC m=+0.140419378 container start e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 24 13:41:24 np0005533938 podman[252472]: multipathd
Nov 24 13:41:24 np0005533938 systemd[1]: Started multipathd container.
Nov 24 13:41:24 np0005533938 multipathd[252487]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:41:24 np0005533938 multipathd[252487]: INFO:__main__:Validating config file
Nov 24 13:41:24 np0005533938 multipathd[252487]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:41:24 np0005533938 multipathd[252487]: INFO:__main__:Writing out command to execute
Nov 24 13:41:24 np0005533938 multipathd[252487]: ++ cat /run_command
Nov 24 13:41:24 np0005533938 multipathd[252487]: + CMD='/usr/sbin/multipathd -d'
Nov 24 13:41:24 np0005533938 multipathd[252487]: + ARGS=
Nov 24 13:41:24 np0005533938 multipathd[252487]: + sudo kolla_copy_cacerts
Nov 24 13:41:24 np0005533938 podman[252494]: 2025-11-24 18:41:24.967582067 +0000 UTC m=+0.066781085 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 13:41:24 np0005533938 multipathd[252487]: + [[ ! -n '' ]]
Nov 24 13:41:24 np0005533938 multipathd[252487]: + . kolla_extend_start
Nov 24 13:41:24 np0005533938 multipathd[252487]: Running command: '/usr/sbin/multipathd -d'
Nov 24 13:41:24 np0005533938 multipathd[252487]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 24 13:41:24 np0005533938 multipathd[252487]: + umask 0022
Nov 24 13:41:24 np0005533938 multipathd[252487]: + exec /usr/sbin/multipathd -d
Nov 24 13:41:24 np0005533938 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-75097967f463f0eb.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 13:41:24 np0005533938 systemd[1]: e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514-75097967f463f0eb.service: Failed with result 'exit-code'.
Nov 24 13:41:24 np0005533938 multipathd[252487]: 3416.682967 | --------start up--------
Nov 24 13:41:24 np0005533938 multipathd[252487]: 3416.682983 | read /etc/multipath.conf
Nov 24 13:41:24 np0005533938 multipathd[252487]: 3416.687791 | path checkers start up
Nov 24 13:41:25 np0005533938 podman[252526]: 2025-11-24 18:41:25.045120497 +0000 UTC m=+0.050460941 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 13:41:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:25 np0005533938 python3.9[252697]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:26 np0005533938 python3.9[252849]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 24 13:41:27 np0005533938 python3.9[253001]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 24 13:41:27 np0005533938 kernel: Key type psk registered
Nov 24 13:41:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:28 np0005533938 python3.9[253164]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:41:28 np0005533938 python3.9[253287]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764009687.5526044-630-175418874463445/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:29 np0005533938 python3.9[253439]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:30 np0005533938 python3.9[253591]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:41:30 np0005533938 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 13:41:30 np0005533938 systemd[1]: Stopped Load Kernel Modules.
Nov 24 13:41:30 np0005533938 systemd[1]: Stopping Load Kernel Modules...
Nov 24 13:41:30 np0005533938 systemd[1]: Starting Load Kernel Modules...
Nov 24 13:41:30 np0005533938 systemd[1]: Finished Load Kernel Modules.
Nov 24 13:41:31 np0005533938 python3.9[253747]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 24 13:41:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:33 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:33 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:33 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:33 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:34 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:34 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:34 np0005533938 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 24 13:41:34 np0005533938 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 24 13:41:34 np0005533938 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 24 13:41:34 np0005533938 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 24 13:41:34 np0005533938 lvm[253863]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:41:34 np0005533938 lvm[253863]: VG ceph_vg1 finished
Nov 24 13:41:34 np0005533938 lvm[253861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:41:34 np0005533938 lvm[253861]: VG ceph_vg0 finished
Nov 24 13:41:34 np0005533938 lvm[253864]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:41:34 np0005533938 lvm[253864]: VG ceph_vg2 finished
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:41:34
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:41:34 np0005533938 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 24 13:41:34 np0005533938 systemd[1]: Starting man-db-cache-update.service...
Nov 24 13:41:34 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:41:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:41:34 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:34 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:35 np0005533938 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 24 13:41:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:36 np0005533938 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 24 13:41:36 np0005533938 systemd[1]: Finished man-db-cache-update.service.
Nov 24 13:41:36 np0005533938 systemd[1]: man-db-cache-update.service: Consumed 1.580s CPU time.
Nov 24 13:41:36 np0005533938 systemd[1]: run-r64ca1d792e1a484b90df61e83628ed26.service: Deactivated successfully.
Nov 24 13:41:36 np0005533938 python3.9[255205]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:41:36 np0005533938 systemd[1]: Stopping Open-iSCSI...
Nov 24 13:41:36 np0005533938 iscsid[242667]: iscsid shutting down.
Nov 24 13:41:36 np0005533938 systemd[1]: iscsid.service: Deactivated successfully.
Nov 24 13:41:36 np0005533938 systemd[1]: Stopped Open-iSCSI.
Nov 24 13:41:36 np0005533938 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 24 13:41:36 np0005533938 systemd[1]: Starting Open-iSCSI...
Nov 24 13:41:36 np0005533938 systemd[1]: Started Open-iSCSI.
Nov 24 13:41:37 np0005533938 python3.9[255360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 24 13:41:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:38 np0005533938 python3.9[255516]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:39 np0005533938 python3.9[255668]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:41:39 np0005533938 systemd[1]: Reloading.
Nov 24 13:41:39 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:41:39 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:41:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:40 np0005533938 python3.9[255852]: ansible-ansible.builtin.service_facts Invoked
Nov 24 13:41:40 np0005533938 network[255869]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 24 13:41:40 np0005533938 network[255870]: 'network-scripts' will be removed from distribution in near future.
Nov 24 13:41:40 np0005533938 network[255871]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 24 13:41:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:41:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:45 np0005533938 python3.9[256146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:45 np0005533938 python3.9[256299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:46 np0005533938 python3.9[256452]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:47 np0005533938 python3.9[256605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:47 np0005533938 python3.9[256758]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:48 np0005533938 python3.9[256911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:49 np0005533938 python3.9[257064]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:50 np0005533938 python3.9[257217]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:41:51 np0005533938 python3.9[257370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:51 np0005533938 python3.9[257522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:52 np0005533938 python3.9[257674]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:52 np0005533938 python3.9[257826]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:53 np0005533938 podman[257950]: 2025-11-24 18:41:53.476716485 +0000 UTC m=+0.082066453 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 24 13:41:53 np0005533938 python3.9[257996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:54 np0005533938 python3.9[258157]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:54 np0005533938 python3.9[258309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:55 np0005533938 podman[258433]: 2025-11-24 18:41:55.276320536 +0000 UTC m=+0.056161382 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 24 13:41:55 np0005533938 podman[258434]: 2025-11-24 18:41:55.302552455 +0000 UTC m=+0.075978213 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:41:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:55 np0005533938 python3.9[258497]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:56 np0005533938 python3.9[258653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:56 np0005533938 python3.9[258805]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:57 np0005533938 python3.9[258957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:41:58 np0005533938 python3.9[259109]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:58 np0005533938 python3.9[259261]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:41:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:41:59 np0005533938 python3.9[259413]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:00 np0005533938 python3.9[259566]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:01 np0005533938 python3.9[259718]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:01 np0005533938 python3.9[259870]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:02 np0005533938 python3.9[260022]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 24 13:42:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:03 np0005533938 python3.9[260174]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:42:03 np0005533938 systemd[1]: Reloading.
Nov 24 13:42:03 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:42:03 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:42:04 np0005533938 python3.9[260362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:05 np0005533938 python3.9[260515]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:05 np0005533938 python3.9[260668]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:06 np0005533938 python3.9[260821]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:07 np0005533938 python3.9[260974]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:07 np0005533938 python3.9[261127]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:08 np0005533938 python3.9[261280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:09 np0005533938 python3.9[261433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 24 13:42:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:10 np0005533938 python3.9[261586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:10 np0005533938 python3.9[261738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:11 np0005533938 python3.9[261890]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:12 np0005533938 python3.9[262042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:13 np0005533938 python3.9[262194]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:13 np0005533938 python3.9[262346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:14 np0005533938 python3.9[262498]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:14 np0005533938 python3.9[262748]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:15 np0005533938 podman[262871]: 2025-11-24 18:42:15.122194465 +0000 UTC m=+0.049653876 container exec 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:42:15 np0005533938 podman[262871]: 2025-11-24 18:42:15.216200023 +0000 UTC m=+0.143659464 container exec_died 6770cfc50a03556511a4d098328da28e11fe7bfb5829310d8693bfdc61b2966d (image=quay.io/ceph/ceph:v18, name=ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mon-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:42:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 24 13:42:15 np0005533938 python3.9[263022]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:42:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:42:15 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:16 np0005533938 python3.9[263346]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7f1d48bf-401e-4271-b064-4466b7a62d19 does not exist
Nov 24 13:42:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev cacdd1c3-1346-4194-be4f-33d6d16d8e7f does not exist
Nov 24 13:42:16 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6366fb20-a392-4a5d-9f54-5983066d2d96 does not exist
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:16 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.097291906 +0000 UTC m=+0.041055532 container create 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:42:17 np0005533938 systemd[1]: Started libpod-conmon-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope.
Nov 24 13:42:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.078322514 +0000 UTC m=+0.022086130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.179324346 +0000 UTC m=+0.123087942 container init 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.190361591 +0000 UTC m=+0.134125187 container start 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.193643662 +0000 UTC m=+0.137407268 container attach 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:42:17 np0005533938 vibrant_rubin[263596]: 167 167
Nov 24 13:42:17 np0005533938 systemd[1]: libpod-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope: Deactivated successfully.
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.196957765 +0000 UTC m=+0.140721431 container died 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:42:17 np0005533938 systemd[1]: var-lib-containers-storage-overlay-65a9f1068ac774feea142d8707b582cf6f5609770c753331f44eb763be292fa4-merged.mount: Deactivated successfully.
Nov 24 13:42:17 np0005533938 podman[263580]: 2025-11-24 18:42:17.251320947 +0000 UTC m=+0.195084573 container remove 3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:17 np0005533938 systemd[1]: libpod-conmon-3b5d68748547c315d4145b6a4903801b25c89bbbb299bfbd8e1597303b5f6236.scope: Deactivated successfully.
Nov 24 13:42:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:17 np0005533938 podman[263620]: 2025-11-24 18:42:17.445547086 +0000 UTC m=+0.040295983 container create 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:42:17 np0005533938 systemd[1]: Started libpod-conmon-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope.
Nov 24 13:42:17 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:17 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:17 np0005533938 podman[263620]: 2025-11-24 18:42:17.431307772 +0000 UTC m=+0.026056689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:17 np0005533938 podman[263620]: 2025-11-24 18:42:17.530184731 +0000 UTC m=+0.124933628 container init 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:17 np0005533938 podman[263620]: 2025-11-24 18:42:17.541250286 +0000 UTC m=+0.135999183 container start 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:42:17 np0005533938 podman[263620]: 2025-11-24 18:42:17.544815045 +0000 UTC m=+0.139563942 container attach 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:42:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:18 np0005533938 dreamy_rosalind[263637]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:42:18 np0005533938 dreamy_rosalind[263637]: --> relative data size: 1.0
Nov 24 13:42:18 np0005533938 dreamy_rosalind[263637]: --> All data devices are unavailable
Nov 24 13:42:18 np0005533938 systemd[1]: libpod-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope: Deactivated successfully.
Nov 24 13:42:18 np0005533938 podman[263620]: 2025-11-24 18:42:18.576532344 +0000 UTC m=+1.171281251 container died 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:42:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ce9aefbe4ff8aa87020c4ced85062f3cb35c90f4f458c3d7fe0a4e19eb3bb100-merged.mount: Deactivated successfully.
Nov 24 13:42:18 np0005533938 podman[263620]: 2025-11-24 18:42:18.631107421 +0000 UTC m=+1.225856318 container remove 949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_rosalind, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:42:18 np0005533938 systemd[1]: libpod-conmon-949789d9bd278eb30418bbaf05638c36cf855e75346508b0ce68897a5deb8d83.scope: Deactivated successfully.
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.145419693 +0000 UTC m=+0.043469573 container create d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:42:19 np0005533938 systemd[1]: Started libpod-conmon-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope.
Nov 24 13:42:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.208948112 +0000 UTC m=+0.106998022 container init d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.214551572 +0000 UTC m=+0.112601472 container start d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:19 np0005533938 magical_snyder[263833]: 167 167
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.219273639 +0000 UTC m=+0.117323579 container attach d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:42:19 np0005533938 systemd[1]: libpod-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope: Deactivated successfully.
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.220665164 +0000 UTC m=+0.118715054 container died d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.125412765 +0000 UTC m=+0.023462705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:19 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e8c9a5e19b7c3637e9137baa1fa2b57999dc70d33b483a567941221dcafb7758-merged.mount: Deactivated successfully.
Nov 24 13:42:19 np0005533938 podman[263817]: 2025-11-24 18:42:19.257396237 +0000 UTC m=+0.155446127 container remove d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 13:42:19 np0005533938 systemd[1]: libpod-conmon-d4e6db603973ff7748814b276e8ba3504e1bcf50b179f592e12c6f6613df348f.scope: Deactivated successfully.
Nov 24 13:42:19 np0005533938 podman[263856]: 2025-11-24 18:42:19.395753488 +0000 UTC m=+0.036348465 container create d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:19 np0005533938 systemd[1]: Started libpod-conmon-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope.
Nov 24 13:42:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:19 np0005533938 podman[263856]: 2025-11-24 18:42:19.475148523 +0000 UTC m=+0.115743520 container init d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:42:19 np0005533938 podman[263856]: 2025-11-24 18:42:19.380912629 +0000 UTC m=+0.021507626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:19 np0005533938 podman[263856]: 2025-11-24 18:42:19.488336151 +0000 UTC m=+0.128931128 container start d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:42:19 np0005533938 podman[263856]: 2025-11-24 18:42:19.495171741 +0000 UTC m=+0.135766738 container attach d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]: {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    "0": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "devices": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "/dev/loop3"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            ],
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_name": "ceph_lv0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_size": "21470642176",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "name": "ceph_lv0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "tags": {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_name": "ceph",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.crush_device_class": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.encrypted": "0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_id": "0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.vdo": "0"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            },
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "vg_name": "ceph_vg0"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        }
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    ],
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    "1": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "devices": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "/dev/loop4"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            ],
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_name": "ceph_lv1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_size": "21470642176",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "name": "ceph_lv1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "tags": {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_name": "ceph",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.crush_device_class": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.encrypted": "0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_id": "1",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.vdo": "0"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            },
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "vg_name": "ceph_vg1"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        }
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    ],
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    "2": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "devices": [
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "/dev/loop5"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            ],
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_name": "ceph_lv2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_size": "21470642176",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "name": "ceph_lv2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "tags": {
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.cluster_name": "ceph",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.crush_device_class": "",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.encrypted": "0",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osd_id": "2",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:                "ceph.vdo": "0"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            },
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "type": "block",
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:            "vg_name": "ceph_vg2"
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:        }
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]:    ]
Nov 24 13:42:20 np0005533938 amazing_noyce[263872]: }
Nov 24 13:42:20 np0005533938 systemd[1]: libpod-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope: Deactivated successfully.
Nov 24 13:42:20 np0005533938 podman[263856]: 2025-11-24 18:42:20.243861651 +0000 UTC m=+0.884456638 container died d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:42:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2a7eb860d187d3d4983ec7e76176419150222c504febb01c53fbcaffac99b2fa-merged.mount: Deactivated successfully.
Nov 24 13:42:20 np0005533938 podman[263856]: 2025-11-24 18:42:20.297029593 +0000 UTC m=+0.937624560 container remove d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:42:20 np0005533938 systemd[1]: libpod-conmon-d7a11960a965612fef298c6e50af4ed5a494f6ec922b956ecf7b7f6c0680f10a.scope: Deactivated successfully.
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.820549453 +0000 UTC m=+0.037204636 container create 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:20 np0005533938 systemd[1]: Started libpod-conmon-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope.
Nov 24 13:42:20 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.891044916 +0000 UTC m=+0.107700119 container init 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.896144243 +0000 UTC m=+0.112799426 container start 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.804285469 +0000 UTC m=+0.020940682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:20 np0005533938 vigilant_keldysh[264051]: 167 167
Nov 24 13:42:20 np0005533938 systemd[1]: libpod-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope: Deactivated successfully.
Nov 24 13:42:20 np0005533938 conmon[264051]: conmon 1afec1a5641165401c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope/container/memory.events
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.902925401 +0000 UTC m=+0.119580604 container attach 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.903160577 +0000 UTC m=+0.119815760 container died 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:42:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2fa2f869cfda5624a7ee04908005560d278295e9b273a996ca2ec1a4d9759adb-merged.mount: Deactivated successfully.
Nov 24 13:42:20 np0005533938 podman[264035]: 2025-11-24 18:42:20.941290535 +0000 UTC m=+0.157945728 container remove 1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 24 13:42:20 np0005533938 systemd[1]: libpod-conmon-1afec1a5641165401c6dd150d4a2973812343ebe88c82613a3294d545338fbc2.scope: Deactivated successfully.
Nov 24 13:42:21 np0005533938 podman[264074]: 2025-11-24 18:42:21.102960576 +0000 UTC m=+0.040839727 container create 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:42:21 np0005533938 systemd[1]: Started libpod-conmon-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope.
Nov 24 13:42:21 np0005533938 podman[264074]: 2025-11-24 18:42:21.081955944 +0000 UTC m=+0.019835115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:42:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:21 np0005533938 podman[264074]: 2025-11-24 18:42:21.239310857 +0000 UTC m=+0.177190028 container init 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:42:21 np0005533938 podman[264074]: 2025-11-24 18:42:21.246785133 +0000 UTC m=+0.184664284 container start 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:42:21 np0005533938 podman[264074]: 2025-11-24 18:42:21.26517251 +0000 UTC m=+0.203051691 container attach 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:42:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:21 np0005533938 python3.9[264223]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]: {
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_id": 0,
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "type": "bluestore"
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    },
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_id": 1,
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "type": "bluestore"
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    },
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_id": 2,
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:        "type": "bluestore"
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]:    }
Nov 24 13:42:22 np0005533938 infallible_lederberg[264134]: }
Nov 24 13:42:22 np0005533938 systemd[1]: libpod-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Deactivated successfully.
Nov 24 13:42:22 np0005533938 podman[264074]: 2025-11-24 18:42:22.254734001 +0000 UTC m=+1.192613152 container died 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:42:22 np0005533938 systemd[1]: libpod-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Consumed 1.012s CPU time.
Nov 24 13:42:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a6743cb0021fd12302919e7cbb915c5c14e8706d47a1e60fd48f8e20b7acbeff-merged.mount: Deactivated successfully.
Nov 24 13:42:22 np0005533938 podman[264074]: 2025-11-24 18:42:22.31178602 +0000 UTC m=+1.249665171 container remove 78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 24 13:42:22 np0005533938 systemd[1]: libpod-conmon-78150cf2e376f227456599edbfccce341b069fe61975540abb16bcfff49099e2.scope: Deactivated successfully.
Nov 24 13:42:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:42:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:42:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:22 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5e4e65c7-8e6b-42df-ab6f-24c7dcc32b3a does not exist
Nov 24 13:42:22 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 0b772f2a-8585-4528-bb59-4b0c8bbeffe7 does not exist
Nov 24 13:42:22 np0005533938 python3.9[264416]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 24 13:42:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.734 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:42:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:42:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:42:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:42:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:42:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:23 np0005533938 python3.9[264624]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 24 13:42:23 np0005533938 podman[264626]: 2025-11-24 18:42:23.717700706 +0000 UTC m=+0.088704788 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Nov 24 13:42:24 np0005533938 systemd-logind[822]: New session 53 of user zuul.
Nov 24 13:42:24 np0005533938 systemd[1]: Started Session 53 of User zuul.
Nov 24 13:42:24 np0005533938 systemd[1]: session-53.scope: Deactivated successfully.
Nov 24 13:42:24 np0005533938 systemd-logind[822]: Session 53 logged out. Waiting for processes to exit.
Nov 24 13:42:24 np0005533938 systemd-logind[822]: Removed session 53.
Nov 24 13:42:25 np0005533938 python3.9[264837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:25 np0005533938 podman[264932]: 2025-11-24 18:42:25.761293879 +0000 UTC m=+0.049162684 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:42:25 np0005533938 podman[264933]: 2025-11-24 18:42:25.781734187 +0000 UTC m=+0.064348221 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 13:42:25 np0005533938 python3.9[264982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009744.9251544-1249-275735206591146/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:26 np0005533938 python3.9[265147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:27 np0005533938 python3.9[265223]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:27 np0005533938 python3.9[265373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:28 np0005533938 python3.9[265494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009747.2194283-1249-72392914024348/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:28 np0005533938 python3.9[265644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:29 np0005533938 python3.9[265765]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009748.456035-1249-36468422848022/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:30 np0005533938 python3.9[265915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:30 np0005533938 python3.9[266036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009749.6552413-1249-182168737992786/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:31 np0005533938 python3.9[266186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:32 np0005533938 python3.9[266307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009751.1491458-1249-104742737783097/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:33 np0005533938 python3.9[266459]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:33 np0005533938 python3.9[266611]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:42:34
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms']
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:42:34 np0005533938 python3.9[266763]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:42:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:42:35 np0005533938 python3.9[266915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:35 np0005533938 python3.9[267038]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764009754.7791407-1356-268345640860651/.source _original_basename=.nuuue61l follow=False checksum=ab11e1f197d206cb17585669c45c8e90deecfff1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 24 13:42:36 np0005533938 python3.9[267190]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:42:37 np0005533938 python3.9[267342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:37 np0005533938 python3.9[267463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009756.7429729-1382-128211473841834/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:38 np0005533938 python3.9[267613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 24 13:42:38 np0005533938 python3.9[267734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764009757.9065819-1397-138364876570183/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 24 13:42:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:39 np0005533938 python3.9[267886]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 24 13:42:40 np0005533938 python3.9[268038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 13:42:41 np0005533938 python3[268190]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 13:42:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:42:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:50 np0005533938 podman[268205]: 2025-11-24 18:42:50.157342865 +0000 UTC m=+8.786662054 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 13:42:50 np0005533938 podman[268288]: 2025-11-24 18:42:50.306381962 +0000 UTC m=+0.049698767 container create 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 24 13:42:50 np0005533938 podman[268288]: 2025-11-24 18:42:50.279423731 +0000 UTC m=+0.022740596 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 13:42:50 np0005533938 python3[268190]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 24 13:42:51 np0005533938 python3.9[268477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:42:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:52 np0005533938 python3.9[268631]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 24 13:42:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:53 np0005533938 python3.9[268783]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 24 13:42:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:53 np0005533938 podman[268935]: 2025-11-24 18:42:53.921844418 +0000 UTC m=+0.136239979 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 24 13:42:54 np0005533938 python3[268936]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 24 13:42:54 np0005533938 podman[268998]: 2025-11-24 18:42:54.359923813 +0000 UTC m=+0.052035655 container create 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 24 13:42:54 np0005533938 podman[268998]: 2025-11-24 18:42:54.333763992 +0000 UTC m=+0.025875874 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 24 13:42:54 np0005533938 python3[268936]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 24 13:42:55 np0005533938 python3.9[269188]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:42:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:55 np0005533938 podman[269314]: 2025-11-24 18:42:55.925083719 +0000 UTC m=+0.055227125 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 24 13:42:55 np0005533938 podman[269315]: 2025-11-24 18:42:55.947060775 +0000 UTC m=+0.077657572 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:42:56 np0005533938 python3.9[269380]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:56 np0005533938 python3.9[269531]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764009776.204409-1489-113297567629772/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 24 13:42:57 np0005533938 python3.9[269607]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 24 13:42:57 np0005533938 systemd[1]: Reloading.
Nov 24 13:42:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:57 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:42:57 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:42:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:42:58 np0005533938 python3.9[269718]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 24 13:42:58 np0005533938 systemd[1]: Reloading.
Nov 24 13:42:58 np0005533938 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 24 13:42:58 np0005533938 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 24 13:42:58 np0005533938 systemd[1]: Starting nova_compute container...
Nov 24 13:42:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:42:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 13:42:58 np0005533938 podman[269758]: 2025-11-24 18:42:58.877143606 +0000 UTC m=+0.093701711 container init 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 24 13:42:58 np0005533938 podman[269758]: 2025-11-24 18:42:58.884863828 +0000 UTC m=+0.101421903 container start 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:42:58 np0005533938 podman[269758]: nova_compute
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + sudo -E kolla_set_configs
Nov 24 13:42:58 np0005533938 systemd[1]: Started nova_compute container.
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Validating config file
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying service configuration files
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Deleting /etc/ceph
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Creating directory /etc/ceph
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Writing out command to execute
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:42:58 np0005533938 nova_compute[269773]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 13:42:58 np0005533938 nova_compute[269773]: ++ cat /run_command
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + CMD=nova-compute
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + ARGS=
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + sudo kolla_copy_cacerts
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + [[ ! -n '' ]]
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + . kolla_extend_start
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + umask 0022
Nov 24 13:42:58 np0005533938 nova_compute[269773]: Running command: 'nova-compute'
Nov 24 13:42:58 np0005533938 nova_compute[269773]: + exec nova-compute
Nov 24 13:42:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:42:59 np0005533938 python3.9[269934]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:43:00 np0005533938 python3.9[270085]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.451 269777 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 24 13:43:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:01 np0005533938 python3.9[270235]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.598 269777 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.632 269777 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:43:01 np0005533938 nova_compute[269773]: 2025-11-24 18:43:01.632 269777 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.241 269777 INFO nova.virt.driver [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.462 269777 INFO nova.compute.provider_config [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.481 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.482 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.483 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.484 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.485 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.486 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.487 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.488 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.489 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.490 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.491 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.492 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.493 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.494 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.495 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.496 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.497 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.498 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.499 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.500 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.501 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.502 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.503 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.504 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.505 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.506 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.507 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.508 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.509 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.510 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.511 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.512 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.513 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.514 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.515 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.516 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.517 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.518 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.519 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.520 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.521 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.522 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.523 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.524 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.525 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.526 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.527 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.528 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.529 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.530 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.531 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.532 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.533 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.534 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.535 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.536 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.537 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.538 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.539 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.540 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.541 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.542 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.543 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.544 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.545 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.546 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.547 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.548 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.549 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.550 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.551 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.552 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.553 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.554 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.555 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.556 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.557 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.558 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.559 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.560 269777 WARNING oslo_config.cfg [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 13:43:02 np0005533938 nova_compute[269773]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 13:43:02 np0005533938 nova_compute[269773]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 13:43:02 np0005533938 nova_compute[269773]: and ``live_migration_inbound_addr`` respectively.
Nov 24 13:43:02 np0005533938 nova_compute[269773]: ).  Its value may be silently ignored in the future.#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.561 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.562 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.563 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_secret_uuid        = e5ee928f-099b-569b-93c9-ecf025cbb50d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.564 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.565 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.566 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.567 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.568 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.569 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.570 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.571 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.572 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.573 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.574 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.575 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.576 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.577 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.578 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.579 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.580 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.581 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.582 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.583 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.584 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.585 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.586 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.587 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.588 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.589 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.590 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.591 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.592 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.593 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.594 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.595 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.596 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.597 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.598 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.599 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.600 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.601 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.602 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.603 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.604 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.605 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.606 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.607 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.608 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.609 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.610 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.611 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.612 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.613 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.614 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.615 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 python3.9[270391]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.616 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.617 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.618 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.619 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.620 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.621 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.622 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.623 269777 DEBUG oslo_service.service [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.624 269777 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 24 13:43:02 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.641 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.642 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 24 13:43:02 np0005533938 systemd[1]: Starting libvirt QEMU daemon...
Nov 24 13:43:02 np0005533938 systemd[1]: Started libvirt QEMU daemon.
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.733 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd49633f910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.736 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd49633f910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.736 269777 INFO nova.virt.libvirt.driver [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.754 269777 WARNING nova.virt.libvirt.driver [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 24 13:43:02 np0005533938 nova_compute[269773]: 2025-11-24 18:43:02.754 269777 DEBUG nova.virt.libvirt.volume.mount [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 24 13:43:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.637 269777 INFO nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <host>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <uuid>ce8f254e-4b98-4140-abc7-8040b35476ad</uuid>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <arch>x86_64</arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model>EPYC-Rome-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <vendor>AMD</vendor>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <microcode version='16777317'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <signature family='23' model='49' stepping='0'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='x2apic'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='tsc-deadline'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='osxsave'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='hypervisor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='tsc_adjust'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='spec-ctrl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='stibp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='arch-capabilities'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='cmp_legacy'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='topoext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='virt-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='lbrv'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='tsc-scale'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='vmcb-clean'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='pause-filter'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='pfthreshold'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='svme-addr-chk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='rdctl-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='skip-l1dfl-vmentry'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='mds-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature name='pschange-mc-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <pages unit='KiB' size='4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <pages unit='KiB' size='2048'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <pages unit='KiB' size='1048576'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <power_management>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <suspend_mem/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </power_management>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <iommu support='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <migration_features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <live/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <uri_transports>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <uri_transport>tcp</uri_transport>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <uri_transport>rdma</uri_transport>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </uri_transports>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </migration_features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <topology>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <cells num='1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <cell id='0'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <memory unit='KiB'>7864320</memory>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <pages unit='KiB' size='2048'>0</pages>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <distances>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <sibling id='0' value='10'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          </distances>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          <cpus num='8'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:          </cpus>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        </cell>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </cells>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </topology>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <cache>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </cache>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <secmodel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model>selinux</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <doi>0</doi>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </secmodel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <secmodel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model>dac</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <doi>0</doi>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </secmodel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </host>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <guest>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <os_type>hvm</os_type>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <arch name='i686'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <wordsize>32</wordsize>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <domain type='qemu'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <domain type='kvm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <pae/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <nonpae/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <acpi default='on' toggle='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <apic default='on' toggle='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <cpuselection/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <deviceboot/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <disksnapshot default='on' toggle='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <externalSnapshot/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </guest>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <guest>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <os_type>hvm</os_type>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <arch name='x86_64'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <wordsize>64</wordsize>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <domain type='qemu'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <domain type='kvm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <acpi default='on' toggle='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <apic default='on' toggle='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <cpuselection/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <deviceboot/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <disksnapshot default='on' toggle='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <externalSnapshot/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </guest>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 
Nov 24 13:43:03 np0005533938 nova_compute[269773]: </capabilities>
Nov 24 13:43:03 np0005533938 python3.9[270626]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.650 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.672 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 13:43:03 np0005533938 nova_compute[269773]: <domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <domain>kvm</domain>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <arch>i686</arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <vcpu max='4096'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <iothreads supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <os supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='firmware'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <loader supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>rom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pflash</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='readonly'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>yes</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='secure'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </loader>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </os>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='maximumMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <vendor>AMD</vendor>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='succor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='custom' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-128'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-256'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-512'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 systemd[1]: Stopping nova_compute container...
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <memoryBacking supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='sourceType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>anonymous</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>memfd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </memoryBacking>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <disk supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='diskDevice'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>disk</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cdrom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>floppy</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>lun</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>fdc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>sata</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </disk>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <graphics supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vnc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egl-headless</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </graphics>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <video supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='modelType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vga</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cirrus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>none</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>bochs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ramfb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </video>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hostdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='mode'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>subsystem</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='startupPolicy'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>mandatory</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>requisite</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>optional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='subsysType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pci</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='capsType'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='pciBackend'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hostdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <rng supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>random</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </rng>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <filesystem supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='driverType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>path</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>handle</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtiofs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </filesystem>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <tpm supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-tis</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-crb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emulator</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>external</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendVersion'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>2.0</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </tpm>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <redirdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </redirdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <channel supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </channel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <crypto supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </crypto>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <interface supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>passt</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </interface>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <panic supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>isa</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>hyperv</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </panic>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <console supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>null</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dev</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pipe</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stdio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>udp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tcp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu-vdagent</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </console>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <gic supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <genid supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backup supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <async-teardown supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <ps2 supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sev supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sgx supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hyperv supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='features'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>relaxed</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vapic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>spinlocks</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vpindex</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>runtime</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>synic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stimer</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reset</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vendor_id</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>frequencies</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reenlightenment</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tlbflush</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ipi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>avic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emsr_bitmap</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>xmm_input</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hyperv>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <launchSecurity supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='sectype'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tdx</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </launchSecurity>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: </domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.685 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 13:43:03 np0005533938 nova_compute[269773]: <domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <domain>kvm</domain>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <arch>i686</arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <vcpu max='240'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <iothreads supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <os supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='firmware'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <loader supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>rom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pflash</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='readonly'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>yes</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='secure'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </loader>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </os>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='maximumMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <vendor>AMD</vendor>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='succor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='custom' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-128'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-256'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-512'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <memoryBacking supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='sourceType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>anonymous</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>memfd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </memoryBacking>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <disk supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='diskDevice'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>disk</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cdrom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>floppy</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>lun</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ide</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>fdc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>sata</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </disk>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <graphics supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vnc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egl-headless</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </graphics>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <video supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='modelType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vga</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cirrus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>none</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>bochs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ramfb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </video>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hostdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='mode'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>subsystem</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='startupPolicy'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>mandatory</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>requisite</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>optional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='subsysType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pci</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='capsType'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='pciBackend'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hostdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <rng supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>random</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </rng>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <filesystem supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='driverType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>path</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>handle</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtiofs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </filesystem>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <tpm supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-tis</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-crb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emulator</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>external</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendVersion'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>2.0</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </tpm>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <redirdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </redirdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <channel supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </channel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <crypto supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </crypto>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <interface supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>passt</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </interface>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <panic supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>isa</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>hyperv</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </panic>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <console supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>null</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dev</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pipe</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stdio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>udp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tcp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu-vdagent</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </console>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <gic supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <genid supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backup supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <async-teardown supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <ps2 supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sev supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sgx supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hyperv supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='features'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>relaxed</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vapic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>spinlocks</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vpindex</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>runtime</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>synic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stimer</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reset</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vendor_id</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>frequencies</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reenlightenment</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tlbflush</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ipi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>avic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emsr_bitmap</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>xmm_input</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hyperv>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <launchSecurity supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='sectype'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tdx</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </launchSecurity>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: </domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.707 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.712 269777 DEBUG nova.virt.libvirt.host [None req-a82e4c6c-504a-48f6-860b-ffeac4708421 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 13:43:03 np0005533938 nova_compute[269773]: <domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <domain>kvm</domain>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <arch>x86_64</arch>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <vcpu max='4096'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <iothreads supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <os supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='firmware'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>efi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <loader supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>rom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pflash</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='readonly'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>yes</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='secure'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>yes</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>no</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </loader>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </os>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='maximumMigratable'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>on</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>off</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <vendor>AMD</vendor>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='succor'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <mode name='custom' supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Denverton-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='auto-ibrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amd-psfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='stibp-always-on'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='EPYC-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-128'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-256'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx10-512'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='prefetchiti'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Haswell-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512er'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512pf'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fma4'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tbm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xop'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='amx-tile'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-bf16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-fp16'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bitalg'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrc'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fzrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='la57'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='taa-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xfd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ifma'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cmpccxadd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fbsdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='fsrs'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ibrs-all'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mcdt-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pbrsb-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='psdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='serialize'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vaes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='hle'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='rtm'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512bw'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512cd'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512dq'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512f'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='avx512vl'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='invpcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pcid'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='pku'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='mpx'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='core-capability'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='split-lock-detect'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='cldemote'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='erms'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='gfni'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdir64b'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='movdiri'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='xsaves'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='athlon-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='core2duo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='coreduo-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='n270-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='ss'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <blockers model='phenom-v1'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnow'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <feature name='3dnowext'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </blockers>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </mode>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </cpu>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <memoryBacking supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <enum name='sourceType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>anonymous</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <value>memfd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </memoryBacking>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <disk supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='diskDevice'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>disk</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cdrom</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>floppy</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>lun</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>fdc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>sata</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </disk>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <graphics supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vnc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egl-headless</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </graphics>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <video supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='modelType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vga</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>cirrus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>none</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>bochs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ramfb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </video>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hostdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='mode'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>subsystem</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='startupPolicy'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>mandatory</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>requisite</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>optional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='subsysType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pci</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>scsi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='capsType'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='pciBackend'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hostdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <rng supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtio-non-transitional</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>random</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>egd</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </rng>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <filesystem supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='driverType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>path</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>handle</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>virtiofs</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </filesystem>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <tpm supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-tis</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tpm-crb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emulator</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>external</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendVersion'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>2.0</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </tpm>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <redirdev supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='bus'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>usb</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </redirdev>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <channel supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </channel>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <crypto supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendModel'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>builtin</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </crypto>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <interface supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='backendType'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>default</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>passt</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </interface>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <panic supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='model'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>isa</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>hyperv</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </panic>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <console supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='type'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>null</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vc</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pty</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dev</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>file</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>pipe</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stdio</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>udp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tcp</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>unix</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>qemu-vdagent</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>dbus</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </console>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </devices>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  <features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <gic supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <genid supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <backup supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <async-teardown supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <ps2 supported='yes'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sev supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <sgx supported='no'/>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <hyperv supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='features'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>relaxed</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vapic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>spinlocks</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vpindex</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>runtime</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>synic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>stimer</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reset</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>vendor_id</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>frequencies</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>reenlightenment</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tlbflush</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>ipi</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>avic</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>emsr_bitmap</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>xmm_input</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </defaults>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </hyperv>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    <launchSecurity supported='yes'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      <enum name='sectype'>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:        <value>tdx</value>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:      </enum>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:    </launchSecurity>
Nov 24 13:43:03 np0005533938 nova_compute[269773]:  </features>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: </domainCapabilities>
Nov 24 13:43:03 np0005533938 nova_compute[269773]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.766 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.771 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 13:43:03 np0005533938 nova_compute[269773]: 2025-11-24 18:43:03.771 269777 DEBUG oslo_concurrency.lockutils [None req-47116b44-9c41-489b-9b9a-492deca71cd9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 13:43:04 np0005533938 virtqemud[270425]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 24 13:43:04 np0005533938 virtqemud[270425]: hostname: compute-0
Nov 24 13:43:04 np0005533938 virtqemud[270425]: End of file while reading data: Input/output error
Nov 24 13:43:04 np0005533938 systemd[1]: libpod-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60.scope: Deactivated successfully.
Nov 24 13:43:04 np0005533938 systemd[1]: libpod-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60.scope: Consumed 3.001s CPU time.
Nov 24 13:43:04 np0005533938 podman[270634]: 2025-11-24 18:43:04.169955638 +0000 UTC m=+0.457391976 container died 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:43:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60-userdata-shm.mount: Deactivated successfully.
Nov 24 13:43:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81-merged.mount: Deactivated successfully.
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:05 np0005533938 podman[270634]: 2025-11-24 18:43:05.134524157 +0000 UTC m=+1.421960495 container cleanup 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 13:43:05 np0005533938 podman[270634]: nova_compute
Nov 24 13:43:05 np0005533938 podman[270665]: nova_compute
Nov 24 13:43:05 np0005533938 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 24 13:43:05 np0005533938 systemd[1]: Stopped nova_compute container.
Nov 24 13:43:05 np0005533938 systemd[1]: Starting nova_compute container...
Nov 24 13:43:05 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:05 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0216af483d77ec471622a589be635ab969f169ad184c06e2ca10bd6aa7a81/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:05 np0005533938 podman[270678]: 2025-11-24 18:43:05.322428049 +0000 UTC m=+0.092991363 container init 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:43:05 np0005533938 podman[270678]: 2025-11-24 18:43:05.334659263 +0000 UTC m=+0.105222557 container start 8bfccfbfd425066c99ff87323aabc0b5530ac34ab41ffeb77e223003743eba60 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 13:43:05 np0005533938 podman[270678]: nova_compute
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + sudo -E kolla_set_configs
Nov 24 13:43:05 np0005533938 systemd[1]: Started nova_compute container.
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Validating config file
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying service configuration files
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /etc/ceph
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Creating directory /etc/ceph
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Writing out command to execute
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:43:05 np0005533938 nova_compute[270693]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 24 13:43:05 np0005533938 nova_compute[270693]: ++ cat /run_command
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + CMD=nova-compute
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + ARGS=
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + sudo kolla_copy_cacerts
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + [[ ! -n '' ]]
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + . kolla_extend_start
Nov 24 13:43:05 np0005533938 nova_compute[270693]: Running command: 'nova-compute'
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + echo 'Running command: '\''nova-compute'\'''
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + umask 0022
Nov 24 13:43:05 np0005533938 nova_compute[270693]: + exec nova-compute
Nov 24 13:43:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:06 np0005533938 python3.9[270856]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 24 13:43:06 np0005533938 systemd[1]: Started libpod-conmon-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope.
Nov 24 13:43:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:06 np0005533938 podman[270882]: 2025-11-24 18:43:06.460406981 +0000 UTC m=+0.137578072 container init 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Nov 24 13:43:06 np0005533938 podman[270882]: 2025-11-24 18:43:06.469392725 +0000 UTC m=+0.146563766 container start 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 13:43:06 np0005533938 python3.9[270856]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Applying nova statedir ownership
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 24 13:43:06 np0005533938 nova_compute_init[270904]: INFO:nova_statedir:Nova statedir ownership complete
Nov 24 13:43:06 np0005533938 systemd[1]: libpod-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope: Deactivated successfully.
Nov 24 13:43:06 np0005533938 podman[270905]: 2025-11-24 18:43:06.547168899 +0000 UTC m=+0.041021391 container died 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 13:43:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66-userdata-shm.mount: Deactivated successfully.
Nov 24 13:43:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6825909f551c1b145e164649b5f4dc006e991842c39724bc87172ccccc24bb5f-merged.mount: Deactivated successfully.
Nov 24 13:43:06 np0005533938 podman[270914]: 2025-11-24 18:43:06.62723258 +0000 UTC m=+0.077199181 container cleanup 5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 24 13:43:06 np0005533938 systemd[1]: libpod-conmon-5e27af85292a9b40c3e4241abdb9b05f5a155fee134dfa070b318e923dd00f66.scope: Deactivated successfully.
Nov 24 13:43:07 np0005533938 systemd[1]: session-52.scope: Deactivated successfully.
Nov 24 13:43:07 np0005533938 systemd[1]: session-52.scope: Consumed 2min 15.251s CPU time.
Nov 24 13:43:07 np0005533938 systemd-logind[822]: Session 52 logged out. Waiting for processes to exit.
Nov 24 13:43:07 np0005533938 systemd-logind[822]: Removed session 52.
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.357 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.358 270697 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 24 13:43:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.494 270697 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.518 270697 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:43:07 np0005533938 nova_compute[270693]: 2025-11-24 18:43:07.519 270697 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 24 13:43:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.158 270697 INFO nova.virt.driver [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.279 270697 INFO nova.compute.provider_config [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.294 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_concurrency.lockutils [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.295 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.296 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.297 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.298 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.299 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.300 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.301 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.302 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.303 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.304 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.305 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.306 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.307 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.308 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.309 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.310 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.311 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.312 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.313 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.314 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.315 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.316 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.317 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.318 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.319 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.320 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.321 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.322 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.323 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.324 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.325 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.326 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.327 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.328 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.329 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.330 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.331 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.332 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.333 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.334 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.335 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.336 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.337 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.338 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.339 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.340 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.341 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.342 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.343 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.344 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.345 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.346 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.347 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.348 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.349 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.350 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.351 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.352 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.353 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.354 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.355 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.356 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.357 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.358 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.359 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.360 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.361 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.362 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.363 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.364 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.365 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.366 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.367 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 WARNING oslo_config.cfg [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 24 13:43:08 np0005533938 nova_compute[270693]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 24 13:43:08 np0005533938 nova_compute[270693]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 24 13:43:08 np0005533938 nova_compute[270693]: and ``live_migration_inbound_addr`` respectively.
Nov 24 13:43:08 np0005533938 nova_compute[270693]: ).  Its value may be silently ignored in the future.#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.368 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.369 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.370 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_secret_uuid        = e5ee928f-099b-569b-93c9-ecf025cbb50d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.371 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.372 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.373 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.374 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.375 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.376 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.377 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.378 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.379 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.380 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.381 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.382 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.383 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.384 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.385 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.386 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.387 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.388 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.389 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.390 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.391 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.392 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.393 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.394 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.395 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.396 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.397 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.398 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.399 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.400 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.401 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.402 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.403 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.404 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.405 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.406 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.407 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.408 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.409 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.410 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.411 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.412 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.413 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.414 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.415 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.416 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.417 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.418 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.419 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.420 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.421 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.422 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.423 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.424 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.425 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.426 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.427 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.428 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.429 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.430 270697 DEBUG oslo_service.service [None req-71e52207-fc7a-478a-8b19-aed0e04ed50c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.431 270697 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.447 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.448 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.448 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.449 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.462 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa09c0bfa30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.464 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa09c0bfa30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.464 270697 INFO nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.472 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host capabilities <capabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <host>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <uuid>ce8f254e-4b98-4140-abc7-8040b35476ad</uuid>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <arch>x86_64</arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model>EPYC-Rome-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <vendor>AMD</vendor>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <microcode version='16777317'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <signature family='23' model='49' stepping='0'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='x2apic'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='tsc-deadline'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='osxsave'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='hypervisor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='tsc_adjust'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='spec-ctrl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='stibp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='arch-capabilities'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='cmp_legacy'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='topoext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='virt-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='lbrv'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='tsc-scale'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='vmcb-clean'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='pause-filter'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='pfthreshold'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='svme-addr-chk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='rdctl-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='skip-l1dfl-vmentry'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='mds-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature name='pschange-mc-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <pages unit='KiB' size='4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <pages unit='KiB' size='2048'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <pages unit='KiB' size='1048576'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <power_management>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <suspend_mem/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </power_management>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <iommu support='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <migration_features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <live/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <uri_transports>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <uri_transport>tcp</uri_transport>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <uri_transport>rdma</uri_transport>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </uri_transports>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </migration_features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <topology>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <cells num='1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <cell id='0'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <memory unit='KiB'>7864320</memory>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <pages unit='KiB' size='2048'>0</pages>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <distances>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <sibling id='0' value='10'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          </distances>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          <cpus num='8'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:          </cpus>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        </cell>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </cells>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </topology>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <cache>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </cache>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <secmodel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model>selinux</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <doi>0</doi>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </secmodel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <secmodel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model>dac</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <doi>0</doi>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </secmodel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </host>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <guest>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <os_type>hvm</os_type>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <arch name='i686'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <wordsize>32</wordsize>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <domain type='qemu'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <domain type='kvm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <pae/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <nonpae/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <acpi default='on' toggle='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <apic default='on' toggle='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <cpuselection/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <deviceboot/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <disksnapshot default='on' toggle='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <externalSnapshot/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </guest>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <guest>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <os_type>hvm</os_type>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <arch name='x86_64'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <wordsize>64</wordsize>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <domain type='qemu'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <domain type='kvm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <acpi default='on' toggle='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <apic default='on' toggle='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <cpuselection/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <deviceboot/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <disksnapshot default='on' toggle='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <externalSnapshot/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </guest>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 
Nov 24 13:43:08 np0005533938 nova_compute[270693]: </capabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.479 270697 WARNING nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.480 270697 DEBUG nova.virt.libvirt.volume.mount [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.481 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.487 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 24 13:43:08 np0005533938 nova_compute[270693]: <domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <domain>kvm</domain>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <arch>i686</arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <vcpu max='4096'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <iothreads supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <os supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='firmware'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <loader supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>rom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pflash</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='readonly'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>yes</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='secure'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </loader>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='maximumMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <vendor>AMD</vendor>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='succor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='custom' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-128'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-256'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-512'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <memoryBacking supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='sourceType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>anonymous</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>memfd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </memoryBacking>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <disk supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='diskDevice'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>disk</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cdrom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>floppy</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>lun</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>fdc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>sata</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <graphics supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vnc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egl-headless</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </graphics>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <video supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='modelType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vga</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cirrus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>none</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>bochs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ramfb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hostdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='mode'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>subsystem</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='startupPolicy'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>mandatory</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>requisite</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>optional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='subsysType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pci</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='capsType'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='pciBackend'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hostdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <rng supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>random</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <filesystem supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='driverType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>path</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>handle</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtiofs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </filesystem>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <tpm supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-tis</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-crb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emulator</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>external</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendVersion'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>2.0</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </tpm>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <redirdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </redirdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <channel supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </channel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <crypto supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </crypto>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <interface supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>passt</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </interface>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <panic supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>isa</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>hyperv</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </panic>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <console supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>null</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dev</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pipe</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stdio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>udp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tcp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu-vdagent</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </console>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <gic supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <genid supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backup supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <async-teardown supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <ps2 supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sev supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sgx supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hyperv supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='features'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>relaxed</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vapic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>spinlocks</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vpindex</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>runtime</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>synic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stimer</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reset</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vendor_id</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>frequencies</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reenlightenment</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tlbflush</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ipi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>avic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emsr_bitmap</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>xmm_input</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hyperv>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <launchSecurity supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='sectype'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tdx</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </launchSecurity>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: </domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.492 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 24 13:43:08 np0005533938 nova_compute[270693]: <domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <domain>kvm</domain>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <arch>i686</arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <vcpu max='240'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <iothreads supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <os supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='firmware'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <loader supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>rom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pflash</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='readonly'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>yes</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='secure'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </loader>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='maximumMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <vendor>AMD</vendor>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='succor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='custom' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-128'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-256'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-512'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <memoryBacking supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='sourceType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>anonymous</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>memfd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </memoryBacking>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <disk supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='diskDevice'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>disk</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cdrom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>floppy</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>lun</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ide</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>fdc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>sata</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <graphics supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vnc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egl-headless</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </graphics>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <video supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='modelType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vga</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cirrus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>none</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>bochs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ramfb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hostdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='mode'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>subsystem</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='startupPolicy'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>mandatory</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>requisite</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>optional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='subsysType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pci</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='capsType'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='pciBackend'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hostdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <rng supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>random</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <filesystem supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='driverType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>path</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>handle</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtiofs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </filesystem>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <tpm supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-tis</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-crb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emulator</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>external</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendVersion'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>2.0</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </tpm>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <redirdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </redirdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <channel supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </channel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <crypto supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </crypto>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <interface supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>passt</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </interface>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <panic supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>isa</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>hyperv</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </panic>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <console supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>null</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dev</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pipe</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stdio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>udp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tcp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu-vdagent</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </console>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <gic supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <genid supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backup supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <async-teardown supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <ps2 supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sev supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sgx supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hyperv supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='features'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>relaxed</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vapic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>spinlocks</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vpindex</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>runtime</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>synic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stimer</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reset</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vendor_id</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>frequencies</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reenlightenment</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tlbflush</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ipi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>avic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emsr_bitmap</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>xmm_input</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hyperv>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <launchSecurity supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='sectype'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tdx</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </launchSecurity>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: </domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.516 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.521 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 24 13:43:08 np0005533938 nova_compute[270693]: <domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <domain>kvm</domain>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <arch>x86_64</arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <vcpu max='4096'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <iothreads supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <os supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='firmware'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>efi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <loader supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>rom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pflash</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='readonly'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>yes</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='secure'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>yes</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </loader>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='maximumMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <vendor>AMD</vendor>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='succor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='custom' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-128'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-256'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-512'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <memoryBacking supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='sourceType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>anonymous</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>memfd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </memoryBacking>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <disk supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='diskDevice'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>disk</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cdrom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>floppy</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>lun</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>fdc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>sata</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <graphics supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vnc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egl-headless</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </graphics>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <video supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='modelType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vga</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cirrus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>none</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>bochs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ramfb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hostdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='mode'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>subsystem</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='startupPolicy'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>mandatory</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>requisite</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>optional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='subsysType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pci</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='capsType'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='pciBackend'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hostdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <rng supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>random</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <filesystem supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='driverType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>path</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>handle</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtiofs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </filesystem>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <tpm supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-tis</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-crb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emulator</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>external</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendVersion'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>2.0</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </tpm>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <redirdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </redirdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <channel supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </channel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <crypto supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </crypto>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <interface supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>passt</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </interface>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <panic supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>isa</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>hyperv</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </panic>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <console supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>null</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dev</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pipe</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stdio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>udp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tcp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu-vdagent</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </console>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <gic supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <genid supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backup supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <async-teardown supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <ps2 supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sev supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sgx supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hyperv supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='features'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>relaxed</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vapic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>spinlocks</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vpindex</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>runtime</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>synic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stimer</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reset</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vendor_id</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>frequencies</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reenlightenment</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tlbflush</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ipi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>avic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emsr_bitmap</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>xmm_input</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hyperv>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <launchSecurity supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='sectype'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tdx</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </launchSecurity>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: </domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.575 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 24 13:43:08 np0005533938 nova_compute[270693]: <domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <path>/usr/libexec/qemu-kvm</path>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <domain>kvm</domain>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <arch>x86_64</arch>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <vcpu max='240'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <iothreads supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <os supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='firmware'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <loader supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>rom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pflash</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='readonly'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>yes</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='secure'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>no</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </loader>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-passthrough' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='hostPassthroughMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='maximum' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='maximumMigratable'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>on</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>off</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='host-model' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <vendor>AMD</vendor>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='x2apic'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-deadline'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='hypervisor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc_adjust'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='spec-ctrl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='stibp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='cmp_legacy'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='overflow-recov'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='succor'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='amd-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='virt-ssbd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lbrv'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='tsc-scale'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='vmcb-clean'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='flushbyasid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pause-filter'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='pfthreshold'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='svme-addr-chk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <feature policy='disable' name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <mode name='custom' supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Broadwell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cascadelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Cooperlake-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Denverton-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Dhyana-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Genoa-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='auto-ibrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Milan-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amd-psfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='no-nested-data-bp'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='null-sel-clr-base'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='stibp-always-on'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-Rome-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='EPYC-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='GraniteRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-128'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-256'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx10-512'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='prefetchiti'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Haswell-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-noTSX'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v6'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Icelake-Server-v7'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='IvyBridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='KnightsMill-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4fmaps'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-4vnniw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512er'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512pf'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G4-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Opteron_G5-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fma4'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tbm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xop'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SapphireRapids-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='amx-tile'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-bf16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-fp16'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512-vpopcntdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bitalg'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vbmi2'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrc'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fzrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='la57'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='taa-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='tsx-ldtrk'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xfd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='SierraForest-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ifma'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-ne-convert'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx-vnni-int8'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='bus-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cmpccxadd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fbsdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='fsrs'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ibrs-all'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mcdt-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pbrsb-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='psdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='sbdr-ssdp-no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='serialize'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vaes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='vpclmulqdq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Client-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='hle'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='rtm'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Skylake-Server-v5'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512bw'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512cd'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512dq'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512f'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='avx512vl'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='invpcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pcid'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='pku'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='mpx'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v2'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v3'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='core-capability'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='split-lock-detect'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='Snowridge-v4'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='cldemote'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='erms'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='gfni'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdir64b'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='movdiri'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='xsaves'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='athlon-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='core2duo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='coreduo-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='n270-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='ss'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <blockers model='phenom-v1'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnow'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <feature name='3dnowext'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </blockers>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </mode>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <memoryBacking supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <enum name='sourceType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>anonymous</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <value>memfd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </memoryBacking>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <disk supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='diskDevice'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>disk</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cdrom</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>floppy</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>lun</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ide</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>fdc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>sata</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <graphics supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vnc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egl-headless</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </graphics>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <video supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='modelType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vga</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>cirrus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>none</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>bochs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ramfb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hostdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='mode'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>subsystem</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='startupPolicy'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>mandatory</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>requisite</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>optional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='subsysType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pci</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>scsi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='capsType'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='pciBackend'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hostdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <rng supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtio-non-transitional</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>random</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>egd</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <filesystem supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='driverType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>path</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>handle</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>virtiofs</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </filesystem>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <tpm supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-tis</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tpm-crb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emulator</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>external</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendVersion'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>2.0</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </tpm>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <redirdev supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='bus'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>usb</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </redirdev>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <channel supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </channel>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <crypto supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendModel'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>builtin</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </crypto>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <interface supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='backendType'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>default</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>passt</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </interface>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <panic supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='model'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>isa</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>hyperv</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </panic>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <console supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='type'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>null</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vc</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pty</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dev</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>file</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>pipe</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stdio</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>udp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tcp</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>unix</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>qemu-vdagent</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>dbus</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </console>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <gic supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <vmcoreinfo supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <genid supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backingStoreInput supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <backup supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <async-teardown supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <ps2 supported='yes'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sev supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <sgx supported='no'/>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <hyperv supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='features'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>relaxed</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vapic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>spinlocks</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vpindex</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>runtime</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>synic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>stimer</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reset</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>vendor_id</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>frequencies</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>reenlightenment</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tlbflush</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>ipi</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>avic</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>emsr_bitmap</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>xmm_input</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <spinlocks>4095</spinlocks>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <stimer_direct>on</stimer_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_direct>on</tlbflush_direct>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <tlbflush_extended>on</tlbflush_extended>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </defaults>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </hyperv>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    <launchSecurity supported='yes'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      <enum name='sectype'>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:        <value>tdx</value>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:      </enum>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:    </launchSecurity>
Nov 24 13:43:08 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: </domainCapabilities>
Nov 24 13:43:08 np0005533938 nova_compute[270693]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
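The domainCapabilities XML dumped above (CPU models with `usable` flags and per-model `<blockers>`) can be inspected programmatically. A minimal sketch, assuming a fragment shaped like the log output (the `SNIPPET` below is a hypothetical abbreviation, not the full document), using only the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment modeled on the <mode name='custom'> section above.
SNIPPET = """
<mode name='custom' supported='yes'>
  <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
  <model usable='no' vendor='Intel'>Snowridge-v2</model>
  <blockers model='Snowridge-v2'>
    <feature name='cldemote'/>
    <feature name='movdiri'/>
  </blockers>
</mode>
"""

def cpu_model_report(xml_text):
    """Return (set of usable model names, {model: [blocker feature names]})."""
    root = ET.fromstring(xml_text)
    usable = {m.text for m in root.findall('model') if m.get('usable') == 'yes'}
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.findall('blockers')
    }
    return usable, blockers
```

In practice the full XML would come from `virsh domcapabilities` or libvirt-python's `getDomainCapabilities()` rather than a literal string.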
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.631 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.631 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Secure Boot support detected#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.633 270697 INFO nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.641 270697 DEBUG nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.672 270697 INFO nova.virt.node [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Determined node identity d1cce7ec-de83-4810-91f8-1852891da8a6 from /var/lib/nova/compute_id#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.696 270697 WARNING nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute nodes ['d1cce7ec-de83-4810-91f8-1852891da8a6'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.738 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 WARNING nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.790 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:43:08 np0005533938 nova_compute[270693]: 2025-11-24 18:43:08.791 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:43:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:43:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802188247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.190 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:43:09 np0005533938 systemd[1]: Starting libvirt nodedev daemon...
Nov 24 13:43:09 np0005533938 systemd[1]: Started libvirt nodedev daemon.
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.452 270697 WARNING nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.453 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.454 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.454 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:43:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.485 270697 WARNING nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] No compute node record for compute-0.ctlplane.example.com:d1cce7ec-de83-4810-91f8-1852891da8a6: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d1cce7ec-de83-4810-91f8-1852891da8a6 could not be found.#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.510 270697 INFO nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: d1cce7ec-de83-4810-91f8-1852891da8a6#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.598 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:43:09 np0005533938 nova_compute[270693]: 2025-11-24 18:43:09.598 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:43:10 np0005533938 nova_compute[270693]: 2025-11-24 18:43:10.716 270697 INFO nova.scheduler.client.report [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [req-d063727b-b741-4d91-984e-65ebae7920b5] Created resource provider record via placement API for resource provider with UUID d1cce7ec-de83-4810-91f8-1852891da8a6 and name compute-0.ctlplane.example.com.#033[00m
Nov 24 13:43:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.560 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:43:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:43:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069805352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.966 270697 DEBUG oslo_concurrency.processutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.972 270697 DEBUG nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 24 13:43:11 np0005533938 nova_compute[270693]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.973 270697 INFO nova.virt.libvirt.host [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.974 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 13:43:11 np0005533938 nova_compute[270693]: 2025-11-24 18:43:11.975 270697 DEBUG nova.virt.libvirt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.041 270697 DEBUG nova.scheduler.client.report [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updated inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.042 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.042 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.143 270697 DEBUG nova.compute.provider_tree [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Updating resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG nova.compute.resource_tracker [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG oslo_concurrency.lockutils [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.169 270697 DEBUG nova.service [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.281 270697 DEBUG nova.service [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 24 13:43:12 np0005533938 nova_compute[270693]: 2025-11-24 18:43:12.282 270697 DEBUG nova.servicegroup.drivers.db [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 24 13:43:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.735 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:43:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.736 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:43:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:43:22.736 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:43:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2bb85ddb-2e73-4fb6-8afd-263fae2242a0 does not exist
Nov 24 13:43:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 033fa4f9-854a-45a2-be67-93c20c2a0cef does not exist
Nov 24 13:43:23 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 1606bbfd-12c6-4ea0-a477-78776478a4d1 does not exist
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.828619577 +0000 UTC m=+0.037309128 container create eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:43:23 np0005533938 systemd[1]: Started libpod-conmon-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope.
Nov 24 13:43:23 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.812136187 +0000 UTC m=+0.020825748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.907596962 +0000 UTC m=+0.116286503 container init eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.921084247 +0000 UTC m=+0.129773788 container start eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.924555233 +0000 UTC m=+0.133244764 container attach eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:43:23 np0005533938 amazing_mirzakhani[271349]: 167 167
Nov 24 13:43:23 np0005533938 systemd[1]: libpod-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope: Deactivated successfully.
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.930578013 +0000 UTC m=+0.139267554 container died eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:43:23 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c126f296c1763689b41f82d91a052fe0037983a64e804944cae46df09f64b79a-merged.mount: Deactivated successfully.
Nov 24 13:43:23 np0005533938 podman[271333]: 2025-11-24 18:43:23.974559577 +0000 UTC m=+0.183249118 container remove eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:43:23 np0005533938 systemd[1]: libpod-conmon-eae5eb953d6d1c2a8a453912c2b1f52c1d14feca8fcbd65a7f57bc7e7756fd9e.scope: Deactivated successfully.
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:23 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:43:24 np0005533938 podman[271361]: 2025-11-24 18:43:24.10375863 +0000 UTC m=+0.112510789 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 13:43:24 np0005533938 podman[271397]: 2025-11-24 18:43:24.189032691 +0000 UTC m=+0.069926290 container create 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:43:24 np0005533938 systemd[1]: Started libpod-conmon-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope.
Nov 24 13:43:24 np0005533938 podman[271397]: 2025-11-24 18:43:24.16286309 +0000 UTC m=+0.043756769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:24 np0005533938 podman[271397]: 2025-11-24 18:43:24.297368855 +0000 UTC m=+0.178262494 container init 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:43:24 np0005533938 podman[271397]: 2025-11-24 18:43:24.318758987 +0000 UTC m=+0.199652616 container start 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:43:24 np0005533938 podman[271397]: 2025-11-24 18:43:24.323602998 +0000 UTC m=+0.204496617 container attach 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:43:25 np0005533938 charming_brattain[271413]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:43:25 np0005533938 charming_brattain[271413]: --> relative data size: 1.0
Nov 24 13:43:25 np0005533938 charming_brattain[271413]: --> All data devices are unavailable
Nov 24 13:43:25 np0005533938 systemd[1]: libpod-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Deactivated successfully.
Nov 24 13:43:25 np0005533938 systemd[1]: libpod-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Consumed 1.057s CPU time.
Nov 24 13:43:25 np0005533938 podman[271397]: 2025-11-24 18:43:25.420650601 +0000 UTC m=+1.301544220 container died 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:43:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bf158c4f09de8e2571eefc6e6102919afffde520b16e185a98ed527017257364-merged.mount: Deactivated successfully.
Nov 24 13:43:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:25 np0005533938 podman[271397]: 2025-11-24 18:43:25.474826168 +0000 UTC m=+1.355719747 container remove 4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:43:25 np0005533938 systemd[1]: libpod-conmon-4496f01bdd7bae9ec022b71c24dd36619fa46a4c498e6c97377b4960c220293d.scope: Deactivated successfully.
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.042039465 +0000 UTC m=+0.044112798 container create 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 24 13:43:26 np0005533938 systemd[1]: Started libpod-conmon-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope.
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.02414616 +0000 UTC m=+0.026219493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.138173956 +0000 UTC m=+0.140247339 container init 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.14680503 +0000 UTC m=+0.148878343 container start 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.150616345 +0000 UTC m=+0.152689728 container attach 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:43:26 np0005533938 gracious_yalow[271623]: 167 167
Nov 24 13:43:26 np0005533938 systemd[1]: libpod-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope: Deactivated successfully.
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.153939918 +0000 UTC m=+0.156013271 container died 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 13:43:26 np0005533938 podman[271609]: 2025-11-24 18:43:26.163330701 +0000 UTC m=+0.067659313 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 24 13:43:26 np0005533938 podman[271612]: 2025-11-24 18:43:26.165987567 +0000 UTC m=+0.069338325 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 24 13:43:26 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b57569a0cf45b2bec6f0f9976dc371d98c5c8a949885250b04d31b4d02792205-merged.mount: Deactivated successfully.
Nov 24 13:43:26 np0005533938 podman[271595]: 2025-11-24 18:43:26.192741313 +0000 UTC m=+0.194814616 container remove 03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 24 13:43:26 np0005533938 systemd[1]: libpod-conmon-03ffb6db578ab06e43c09626a4fcf536941952eaf244f0b56f32280e0b961ad0.scope: Deactivated successfully.
Nov 24 13:43:26 np0005533938 podman[271672]: 2025-11-24 18:43:26.359266084 +0000 UTC m=+0.034510619 container create dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:43:26 np0005533938 systemd[1]: Started libpod-conmon-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope.
Nov 24 13:43:26 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:26 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:26 np0005533938 podman[271672]: 2025-11-24 18:43:26.344695892 +0000 UTC m=+0.019940457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:26 np0005533938 podman[271672]: 2025-11-24 18:43:26.441334405 +0000 UTC m=+0.116578940 container init dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:43:26 np0005533938 podman[271672]: 2025-11-24 18:43:26.450312389 +0000 UTC m=+0.125556944 container start dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 24 13:43:26 np0005533938 podman[271672]: 2025-11-24 18:43:26.45518836 +0000 UTC m=+0.130432905 container attach dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]: {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    "0": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "devices": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "/dev/loop3"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            ],
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_name": "ceph_lv0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_size": "21470642176",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "name": "ceph_lv0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "tags": {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_name": "ceph",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.crush_device_class": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.encrypted": "0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_id": "0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.vdo": "0"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            },
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "vg_name": "ceph_vg0"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        }
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    ],
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    "1": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "devices": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "/dev/loop4"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            ],
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_name": "ceph_lv1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_size": "21470642176",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "name": "ceph_lv1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "tags": {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_name": "ceph",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.crush_device_class": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.encrypted": "0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_id": "1",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.vdo": "0"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            },
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "vg_name": "ceph_vg1"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        }
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    ],
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    "2": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "devices": [
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "/dev/loop5"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            ],
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_name": "ceph_lv2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_size": "21470642176",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "name": "ceph_lv2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "tags": {
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.cluster_name": "ceph",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.crush_device_class": "",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.encrypted": "0",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osd_id": "2",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:                "ceph.vdo": "0"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            },
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "type": "block",
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:            "vg_name": "ceph_vg2"
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:        }
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]:    ]
Nov 24 13:43:27 np0005533938 frosty_jackson[271688]: }
Nov 24 13:43:27 np0005533938 systemd[1]: libpod-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope: Deactivated successfully.
Nov 24 13:43:27 np0005533938 podman[271672]: 2025-11-24 18:43:27.202392233 +0000 UTC m=+0.877636778 container died dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:43:27 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a0057a89d23385c0c4f3f4e10c3ea695a75e5681609fb8d5be8ef0b5ab25eafa-merged.mount: Deactivated successfully.
Nov 24 13:43:27 np0005533938 podman[271672]: 2025-11-24 18:43:27.255145795 +0000 UTC m=+0.930390330 container remove dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:43:27 np0005533938 systemd[1]: libpod-conmon-dec15d138efb8eb0268ea81012dfb576139e59a80422ed75d28a7d3d984a4c19.scope: Deactivated successfully.
Nov 24 13:43:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.880427816 +0000 UTC m=+0.039954025 container create 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:43:27 np0005533938 systemd[1]: Started libpod-conmon-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope.
Nov 24 13:43:27 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.866211432 +0000 UTC m=+0.025737671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.97347722 +0000 UTC m=+0.133003479 container init 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.97870635 +0000 UTC m=+0.138232559 container start 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:43:27 np0005533938 inspiring_mahavira[271864]: 167 167
Nov 24 13:43:27 np0005533938 systemd[1]: libpod-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope: Deactivated successfully.
Nov 24 13:43:27 np0005533938 conmon[271864]: conmon 6d465888e870cfa0f291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope/container/memory.events
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.984784501 +0000 UTC m=+0.144310720 container attach 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:43:27 np0005533938 podman[271848]: 2025-11-24 18:43:27.98513078 +0000 UTC m=+0.144656999 container died 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:43:28 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1edaefc83fb1b60e79734e83ecc5eace055c21f6446ce5c7365c0e170e35dc08-merged.mount: Deactivated successfully.
Nov 24 13:43:28 np0005533938 podman[271848]: 2025-11-24 18:43:28.0253458 +0000 UTC m=+0.184872019 container remove 6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:43:28 np0005533938 systemd[1]: libpod-conmon-6d465888e870cfa0f2916674016ccab9b07f7fb44219c88c6a9e0fd90196546b.scope: Deactivated successfully.
Nov 24 13:43:28 np0005533938 podman[271887]: 2025-11-24 18:43:28.176413517 +0000 UTC m=+0.039445352 container create d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:43:28 np0005533938 systemd[1]: Started libpod-conmon-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope.
Nov 24 13:43:28 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:43:28 np0005533938 podman[271887]: 2025-11-24 18:43:28.160132422 +0000 UTC m=+0.023164347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:43:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:28 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:43:28 np0005533938 podman[271887]: 2025-11-24 18:43:28.266223001 +0000 UTC m=+0.129254856 container init d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:43:28 np0005533938 podman[271887]: 2025-11-24 18:43:28.275823509 +0000 UTC m=+0.138855334 container start d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:43:28 np0005533938 podman[271887]: 2025-11-24 18:43:28.278530037 +0000 UTC m=+0.141561882 container attach d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]: {
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_id": 0,
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "type": "bluestore"
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    },
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_id": 1,
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "type": "bluestore"
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    },
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_id": 2,
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:        "type": "bluestore"
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]:    }
Nov 24 13:43:29 np0005533938 epic_satoshi[271903]: }
Nov 24 13:43:29 np0005533938 systemd[1]: libpod-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope: Deactivated successfully.
Nov 24 13:43:29 np0005533938 podman[271887]: 2025-11-24 18:43:29.257690788 +0000 UTC m=+1.120722623 container died d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:43:29 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5167ee9bd6866ba02a63b91664012200d2a832b7e8a2f52d56e465b5b2ea468d-merged.mount: Deactivated successfully.
Nov 24 13:43:29 np0005533938 podman[271887]: 2025-11-24 18:43:29.325149445 +0000 UTC m=+1.188181280 container remove d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:43:29 np0005533938 systemd[1]: libpod-conmon-d91977f5502a3dccf339d43c185c50a5b936cf816a2cc5fd27261acb76902792.scope: Deactivated successfully.
Nov 24 13:43:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:43:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:29 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:43:29 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:29 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 0d916dc8-5ec3-4427-adc3-1f48d8a78d67 does not exist
Nov 24 13:43:29 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 022edc19-dfeb-4192-a1ed-7b5d7446ca13 does not exist
Nov 24 13:43:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4022853737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414935411' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:43:30 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614245368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:43:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:43:34
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log']
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:43:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:43:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:43:43 np0005533938 nova_compute[270693]: 2025-11-24 18:43:43.284 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:43:43 np0005533938 nova_compute[270693]: 2025-11-24 18:43:43.309 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:43:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:55 np0005533938 podman[272001]: 2025-11-24 18:43:55.012323968 +0000 UTC m=+0.098349607 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 24 13:43:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:56 np0005533938 podman[272029]: 2025-11-24 18:43:56.963604056 +0000 UTC m=+0.053408279 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:43:56 np0005533938 podman[272030]: 2025-11-24 18:43:56.967497123 +0000 UTC m=+0.053939303 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd)
Nov 24 13:43:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:43:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:43:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:44:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:44:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:44:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:44:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:44:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:15 np0005533938 rsyslogd[1008]: imjournal: 594 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 13:45:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:45:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:45:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:45:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708866176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:45:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.739 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:45:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:45:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:45:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:45:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:27 np0005533938 podman[273159]: 2025-11-24 18:45:27.98027755 +0000 UTC m=+0.079030348 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:45:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:29 np0005533938 podman[273186]: 2025-11-24 18:45:29.992874459 +0000 UTC m=+0.090158700 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:45:30 np0005533938 podman[273187]: 2025-11-24 18:45:30.02466265 +0000 UTC m=+0.107117501 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:45:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:45:34
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:45:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:45:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:37 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 411f0c84-1c9d-4dfe-b968-ee29d48aef52 does not exist
Nov 24 13:45:37 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 9468b2a7-fbbb-4a53-a948-14dc81bea8fd does not exist
Nov 24 13:45:37 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2a808531-6f0a-41a5-9f6e-914fb8df27d1 does not exist
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:45:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:37 np0005533938 podman[273495]: 2025-11-24 18:45:37.932634259 +0000 UTC m=+0.045384643 container create ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:45:37 np0005533938 systemd[1]: Started libpod-conmon-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope.
Nov 24 13:45:37 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:37.910590788 +0000 UTC m=+0.023341252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:38.007896257 +0000 UTC m=+0.120646691 container init ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:38.014173935 +0000 UTC m=+0.126924329 container start ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:38.01693576 +0000 UTC m=+0.129686164 container attach ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:45:38 np0005533938 hardcore_bohr[273512]: 167 167
Nov 24 13:45:38 np0005533938 systemd[1]: libpod-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope: Deactivated successfully.
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:38.018882856 +0000 UTC m=+0.131633250 container died ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:45:38 np0005533938 systemd[1]: var-lib-containers-storage-overlay-14ba781f9c67ad8886f58bfb7da2855477d5fdab092eab13ac5a69758b4143f6-merged.mount: Deactivated successfully.
Nov 24 13:45:38 np0005533938 podman[273495]: 2025-11-24 18:45:38.056530376 +0000 UTC m=+0.169280760 container remove ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:45:38 np0005533938 systemd[1]: libpod-conmon-ebd3109ae11276de540c6175ee9d845083c63035ad2d8740056d4f5974839f43.scope: Deactivated successfully.
Nov 24 13:45:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:45:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:38 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:45:38 np0005533938 podman[273534]: 2025-11-24 18:45:38.216468773 +0000 UTC m=+0.041147393 container create cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:45:38 np0005533938 systemd[1]: Started libpod-conmon-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope.
Nov 24 13:45:38 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:38 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:38 np0005533938 podman[273534]: 2025-11-24 18:45:38.200768163 +0000 UTC m=+0.025446813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:38 np0005533938 podman[273534]: 2025-11-24 18:45:38.298822729 +0000 UTC m=+0.123501439 container init cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:45:38 np0005533938 podman[273534]: 2025-11-24 18:45:38.305741902 +0000 UTC m=+0.130420522 container start cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:45:38 np0005533938 podman[273534]: 2025-11-24 18:45:38.308637861 +0000 UTC m=+0.133316481 container attach cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:39 np0005533938 elastic_matsumoto[273550]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:45:39 np0005533938 elastic_matsumoto[273550]: --> relative data size: 1.0
Nov 24 13:45:39 np0005533938 elastic_matsumoto[273550]: --> All data devices are unavailable
Nov 24 13:45:39 np0005533938 systemd[1]: libpod-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope: Deactivated successfully.
Nov 24 13:45:39 np0005533938 podman[273534]: 2025-11-24 18:45:39.3458165 +0000 UTC m=+1.170495110 container died cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:45:39 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0cb8a0a7fc4a1c810b5d94feed7a0d2c8dfc71af5ec80d7a58d7e4639d63cdbd-merged.mount: Deactivated successfully.
Nov 24 13:45:39 np0005533938 podman[273534]: 2025-11-24 18:45:39.399807015 +0000 UTC m=+1.224485635 container remove cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 24 13:45:39 np0005533938 systemd[1]: libpod-conmon-cbe7adc79494e7dcc40c42914770b7888169b7404023906132a4652a8ca267bb.scope: Deactivated successfully.
Nov 24 13:45:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:39 np0005533938 podman[273731]: 2025-11-24 18:45:39.928849191 +0000 UTC m=+0.035440028 container create fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:45:39 np0005533938 systemd[1]: Started libpod-conmon-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope.
Nov 24 13:45:39 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:39.999819908 +0000 UTC m=+0.106410765 container init fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:40.006501095 +0000 UTC m=+0.113091932 container start fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:45:40 np0005533938 blissful_wiles[273747]: 167 167
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:40.010189412 +0000 UTC m=+0.116780249 container attach fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:39.914318068 +0000 UTC m=+0.020908925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:40 np0005533938 systemd[1]: libpod-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope: Deactivated successfully.
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:40.012422385 +0000 UTC m=+0.119013222 container died fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:45:40 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a420a30a07befc038b88c2fae075e7b515d41b445b663e51544e4f523dffe8f3-merged.mount: Deactivated successfully.
Nov 24 13:45:40 np0005533938 podman[273731]: 2025-11-24 18:45:40.045352023 +0000 UTC m=+0.151942860 container remove fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:45:40 np0005533938 systemd[1]: libpod-conmon-fdcf0bff18a02a4f706472abd50d2761eea0e6dadc6ff9adfa430fe1c4faef37.scope: Deactivated successfully.
Nov 24 13:45:40 np0005533938 podman[273772]: 2025-11-24 18:45:40.199821292 +0000 UTC m=+0.037840495 container create 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:45:40 np0005533938 systemd[1]: Started libpod-conmon-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope.
Nov 24 13:45:40 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:40 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:40 np0005533938 podman[273772]: 2025-11-24 18:45:40.266134698 +0000 UTC m=+0.104153921 container init 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:40 np0005533938 podman[273772]: 2025-11-24 18:45:40.27892926 +0000 UTC m=+0.116948503 container start 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:40 np0005533938 podman[273772]: 2025-11-24 18:45:40.184884829 +0000 UTC m=+0.022904062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:40 np0005533938 podman[273772]: 2025-11-24 18:45:40.283036507 +0000 UTC m=+0.121055760 container attach 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:45:41 np0005533938 nice_bassi[273788]: {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    "0": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "devices": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "/dev/loop3"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            ],
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_name": "ceph_lv0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_size": "21470642176",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "name": "ceph_lv0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "tags": {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_name": "ceph",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.crush_device_class": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.encrypted": "0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_id": "0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.vdo": "0"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            },
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "vg_name": "ceph_vg0"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        }
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    ],
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    "1": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "devices": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "/dev/loop4"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            ],
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_name": "ceph_lv1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_size": "21470642176",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "name": "ceph_lv1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "tags": {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_name": "ceph",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.crush_device_class": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.encrypted": "0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_id": "1",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.vdo": "0"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            },
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "vg_name": "ceph_vg1"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        }
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    ],
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    "2": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "devices": [
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "/dev/loop5"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            ],
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_name": "ceph_lv2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_size": "21470642176",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "name": "ceph_lv2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "tags": {
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.cluster_name": "ceph",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.crush_device_class": "",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.encrypted": "0",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osd_id": "2",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:                "ceph.vdo": "0"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            },
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "type": "block",
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:            "vg_name": "ceph_vg2"
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:        }
Nov 24 13:45:41 np0005533938 nice_bassi[273788]:    ]
Nov 24 13:45:41 np0005533938 nice_bassi[273788]: }
Nov 24 13:45:41 np0005533938 systemd[1]: libpod-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope: Deactivated successfully.
Nov 24 13:45:41 np0005533938 podman[273772]: 2025-11-24 18:45:41.059694343 +0000 UTC m=+0.897713546 container died 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:45:41 np0005533938 systemd[1]: var-lib-containers-storage-overlay-183a5550692f556bf0a09bc4832b46a36f14a6b3e4a4e51154624d26393a5584-merged.mount: Deactivated successfully.
Nov 24 13:45:41 np0005533938 podman[273772]: 2025-11-24 18:45:41.268335711 +0000 UTC m=+1.106354954 container remove 42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:41 np0005533938 systemd[1]: libpod-conmon-42d6467a7f3d932e10492d75c8c7158811dc85fea1be3c5f181d8542434e7809.scope: Deactivated successfully.
Nov 24 13:45:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:41 np0005533938 podman[273950]: 2025-11-24 18:45:41.939391221 +0000 UTC m=+0.056039665 container create 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:45:41 np0005533938 systemd[1]: Started libpod-conmon-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope.
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:41.914015271 +0000 UTC m=+0.030663735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:42.039056365 +0000 UTC m=+0.155704819 container init 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:42.044651417 +0000 UTC m=+0.161299841 container start 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 13:45:42 np0005533938 infallible_moser[273966]: 167 167
Nov 24 13:45:42 np0005533938 systemd[1]: libpod-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope: Deactivated successfully.
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:42.06170051 +0000 UTC m=+0.178348954 container attach 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:42.062095609 +0000 UTC m=+0.178744063 container died 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:45:42 np0005533938 systemd[1]: var-lib-containers-storage-overlay-86aa914a4de60c8d1667448339822273255def061b7c5585e4330992b3cb7a51-merged.mount: Deactivated successfully.
Nov 24 13:45:42 np0005533938 podman[273950]: 2025-11-24 18:45:42.205920497 +0000 UTC m=+0.322568931 container remove 501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:45:42 np0005533938 systemd[1]: libpod-conmon-501f763be93bcf20f6da8afb7822ad5de65e64145162427518bef4332d4aba31.scope: Deactivated successfully.
Nov 24 13:45:42 np0005533938 podman[273990]: 2025-11-24 18:45:42.366451528 +0000 UTC m=+0.041303986 container create c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:45:42 np0005533938 systemd[1]: Started libpod-conmon-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope.
Nov 24 13:45:42 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:45:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:42 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:45:42 np0005533938 podman[273990]: 2025-11-24 18:45:42.44104847 +0000 UTC m=+0.115900928 container init c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:45:42 np0005533938 podman[273990]: 2025-11-24 18:45:42.346023806 +0000 UTC m=+0.020876294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:45:42 np0005533938 podman[273990]: 2025-11-24 18:45:42.447459912 +0000 UTC m=+0.122312370 container start c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:45:42 np0005533938 podman[273990]: 2025-11-24 18:45:42.454694193 +0000 UTC m=+0.129546651 container attach c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:45:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]: {
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_id": 0,
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "type": "bluestore"
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    },
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_id": 1,
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "type": "bluestore"
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    },
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_id": 2,
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:        "type": "bluestore"
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]:    }
Nov 24 13:45:43 np0005533938 boring_elgamal[274006]: }
Nov 24 13:45:43 np0005533938 systemd[1]: libpod-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope: Deactivated successfully.
Nov 24 13:45:43 np0005533938 podman[273990]: 2025-11-24 18:45:43.344202974 +0000 UTC m=+1.019055432 container died c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:45:43 np0005533938 systemd[1]: var-lib-containers-storage-overlay-3ae8b84339254d873ad196c02f4e711550a5e9c9c0003ce01b6d10f27b93cda2-merged.mount: Deactivated successfully.
Nov 24 13:45:43 np0005533938 podman[273990]: 2025-11-24 18:45:43.396221792 +0000 UTC m=+1.071074250 container remove c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:45:43 np0005533938 systemd[1]: libpod-conmon-c9dac9d4df657057640a0327a3ec2208c32e88eb8b6198a0f2904de2eee7a615.scope: Deactivated successfully.
Nov 24 13:45:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:45:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:45:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 71637212-7ee6-4e5c-9c1d-3a14e86fa32b does not exist
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 455f0bbb-053b-493c-920d-3499c2d03e2a does not exist
Nov 24 13:45:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:44 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:45:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:45:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:45:59 np0005533938 podman[274102]: 2025-11-24 18:45:59.051984367 +0000 UTC m=+0.136103205 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:45:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:00 np0005533938 podman[274130]: 2025-11-24 18:46:00.981023722 +0000 UTC m=+0.070684911 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 24 13:46:00 np0005533938 podman[274129]: 2025-11-24 18:46:00.981417081 +0000 UTC m=+0.065749204 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:46:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:07 np0005533938 nova_compute[270693]: 2025-11-24 18:46:07.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:07 np0005533938 nova_compute[270693]: 2025-11-24 18:46:07.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:46:07 np0005533938 nova_compute[270693]: 2025-11-24 18:46:07.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:46:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:07 np0005533938 nova_compute[270693]: 2025-11-24 18:46:07.550 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:46:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.566 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:46:08 np0005533938 nova_compute[270693]: 2025-11-24 18:46:08.566 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:46:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:46:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731849222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.035 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.253 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.255 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.255 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.323 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.324 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.341 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:46:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:46:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106926518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.776 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.781 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.801 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.803 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:46:09 np0005533938 nova_compute[270693]: 2025-11-24 18:46:09.803 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.804 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:10 np0005533938 nova_compute[270693]: 2025-11-24 18:46:10.805 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:46:11 np0005533938 nova_compute[270693]: 2025-11-24 18:46:11.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:11 np0005533938 nova_compute[270693]: 2025-11-24 18:46:11.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:46:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.840633) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977840719, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 770, "num_deletes": 257, "total_data_size": 960747, "memory_usage": 974856, "flush_reason": "Manual Compaction"}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977853968, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 951989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18736, "largest_seqno": 19505, "table_properties": {"data_size": 948092, "index_size": 1677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8356, "raw_average_key_size": 18, "raw_value_size": 940170, "raw_average_value_size": 2052, "num_data_blocks": 76, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009912, "oldest_key_time": 1764009912, "file_creation_time": 1764009977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13364 microseconds, and 4161 cpu microseconds.
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.854011) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 951989 bytes OK
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.854029) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855681) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855695) EVENT_LOG_v1 {"time_micros": 1764009977855691, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.855713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 956828, prev total WAL file size 956828, number of live WAL files 2.
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.856336) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(929KB)], [44(6083KB)]
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977856390, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7181640, "oldest_snapshot_seqno": -1}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4131 keys, 7043795 bytes, temperature: kUnknown
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977902950, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7043795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7015662, "index_size": 16695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 102387, "raw_average_key_size": 24, "raw_value_size": 6940305, "raw_average_value_size": 1680, "num_data_blocks": 702, "num_entries": 4131, "num_filter_entries": 4131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764009977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.903154) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7043795 bytes
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.904630) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 151.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 5.9 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(14.9) write-amplify(7.4) OK, records in: 4657, records dropped: 526 output_compression: NoCompression
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.904646) EVENT_LOG_v1 {"time_micros": 1764009977904638, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46616, "compaction_time_cpu_micros": 19118, "output_level": 6, "num_output_files": 1, "total_output_size": 7043795, "num_input_records": 4657, "num_output_records": 4131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977904876, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764009977905795, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.856240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:17 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:46:17.905959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:46:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:46:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:46:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:46:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597587750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:46:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:46:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.740 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:46:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:46:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:46:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:30 np0005533938 podman[274213]: 2025-11-24 18:46:30.029924735 +0000 UTC m=+0.114230539 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 24 13:46:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:31 np0005533938 podman[274239]: 2025-11-24 18:46:31.963198315 +0000 UTC m=+0.059574401 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 24 13:46:31 np0005533938 podman[274240]: 2025-11-24 18:46:31.994159433 +0000 UTC m=+0.077538971 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 13:46:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:46:34
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:46:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:46:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:46:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:44 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7f44ac36-4f13-4351-a42c-cb69328a91e6 does not exist
Nov 24 13:46:44 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev d260362f-74ad-431f-8abc-e579c7260a56 does not exist
Nov 24 13:46:44 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 8cdba7ff-6240-4a44-adc6-61ea24add6de does not exist
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:46:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:46:44 np0005533938 podman[274549]: 2025-11-24 18:46:44.941293425 +0000 UTC m=+0.048216902 container create 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:46:44 np0005533938 systemd[1]: Started libpod-conmon-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope.
Nov 24 13:46:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:45 np0005533938 podman[274549]: 2025-11-24 18:46:44.919696696 +0000 UTC m=+0.026620183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:45 np0005533938 podman[274549]: 2025-11-24 18:46:45.016365714 +0000 UTC m=+0.123289181 container init 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:46:45 np0005533938 podman[274549]: 2025-11-24 18:46:45.027743173 +0000 UTC m=+0.134666660 container start 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:46:45 np0005533938 podman[274549]: 2025-11-24 18:46:45.032087789 +0000 UTC m=+0.139011236 container attach 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:46:45 np0005533938 hungry_dirac[274565]: 167 167
Nov 24 13:46:45 np0005533938 systemd[1]: libpod-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope: Deactivated successfully.
Nov 24 13:46:45 np0005533938 conmon[274565]: conmon 4cffcd97e3e589f5813f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope/container/memory.events
Nov 24 13:46:45 np0005533938 podman[274570]: 2025-11-24 18:46:45.079276985 +0000 UTC m=+0.027815322 container died 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 24 13:46:45 np0005533938 systemd[1]: var-lib-containers-storage-overlay-869ca16263465559a0f21f82a5e996cc261ffdc1c0e74a0323dbbe8a7553b436-merged.mount: Deactivated successfully.
Nov 24 13:46:45 np0005533938 podman[274570]: 2025-11-24 18:46:45.126395219 +0000 UTC m=+0.074933536 container remove 4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:45 np0005533938 systemd[1]: libpod-conmon-4cffcd97e3e589f5813fe0fd154c29a7efccd57c01c317a38e07cecdf22fa255.scope: Deactivated successfully.
Nov 24 13:46:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:46:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:45 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:46:45 np0005533938 podman[274592]: 2025-11-24 18:46:45.335941073 +0000 UTC m=+0.052966719 container create ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:46:45 np0005533938 systemd[1]: Started libpod-conmon-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope.
Nov 24 13:46:45 np0005533938 podman[274592]: 2025-11-24 18:46:45.308707305 +0000 UTC m=+0.025733001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:45 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:45 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:45 np0005533938 podman[274592]: 2025-11-24 18:46:45.427461125 +0000 UTC m=+0.144486751 container init ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:46:45 np0005533938 podman[274592]: 2025-11-24 18:46:45.44154849 +0000 UTC m=+0.158574096 container start ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 24 13:46:45 np0005533938 podman[274592]: 2025-11-24 18:46:45.445415715 +0000 UTC m=+0.162441361 container attach ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:46:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:46 np0005533938 stoic_lalande[274608]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:46:46 np0005533938 stoic_lalande[274608]: --> relative data size: 1.0
Nov 24 13:46:46 np0005533938 stoic_lalande[274608]: --> All data devices are unavailable
Nov 24 13:46:46 np0005533938 systemd[1]: libpod-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Deactivated successfully.
Nov 24 13:46:46 np0005533938 systemd[1]: libpod-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Consumed 1.023s CPU time.
Nov 24 13:46:46 np0005533938 podman[274592]: 2025-11-24 18:46:46.517371784 +0000 UTC m=+1.234397400 container died ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:46:46 np0005533938 systemd[1]: var-lib-containers-storage-overlay-394577e4915ec5ba8603cf4bd1ed6a1684d68868a222ebe710116815c2b3159a-merged.mount: Deactivated successfully.
Nov 24 13:46:46 np0005533938 podman[274592]: 2025-11-24 18:46:46.582742455 +0000 UTC m=+1.299768071 container remove ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:46:46 np0005533938 systemd[1]: libpod-conmon-ff6ef4af521549f08265d01f825b31a3813e18c3d1183cd0abe76e38959d5c4c.scope: Deactivated successfully.
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.199191477 +0000 UTC m=+0.041839546 container create f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:46:47 np0005533938 systemd[1]: Started libpod-conmon-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope.
Nov 24 13:46:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.272711678 +0000 UTC m=+0.115359807 container init f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.27810557 +0000 UTC m=+0.120753679 container start f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.185295696 +0000 UTC m=+0.027943785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.282122838 +0000 UTC m=+0.124771007 container attach f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:46:47 np0005533938 sleepy_shirley[274806]: 167 167
Nov 24 13:46:47 np0005533938 systemd[1]: libpod-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope: Deactivated successfully.
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.285748437 +0000 UTC m=+0.128396546 container died f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:46:47 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1ed4d6dcc735b443e0d1f8662a77c6560e3df1b7731878c36e1066eb8d9367c7-merged.mount: Deactivated successfully.
Nov 24 13:46:47 np0005533938 podman[274790]: 2025-11-24 18:46:47.335215149 +0000 UTC m=+0.177863258 container remove f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shirley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:46:47 np0005533938 systemd[1]: libpod-conmon-f54900a85880df485b37bcc7617e4e4f66dc08e7f66214549fea29ebd57caa01.scope: Deactivated successfully.
Nov 24 13:46:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:47 np0005533938 podman[274830]: 2025-11-24 18:46:47.571137549 +0000 UTC m=+0.055675595 container create 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:46:47 np0005533938 systemd[1]: Started libpod-conmon-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope.
Nov 24 13:46:47 np0005533938 podman[274830]: 2025-11-24 18:46:47.545616903 +0000 UTC m=+0.030154999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:47 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:47 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:47 np0005533938 podman[274830]: 2025-11-24 18:46:47.672806989 +0000 UTC m=+0.157345005 container init 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:46:47 np0005533938 podman[274830]: 2025-11-24 18:46:47.679436892 +0000 UTC m=+0.163974908 container start 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 24 13:46:47 np0005533938 podman[274830]: 2025-11-24 18:46:47.682328262 +0000 UTC m=+0.166866268 container attach 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:46:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]: {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    "0": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "devices": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "/dev/loop3"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            ],
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_name": "ceph_lv0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_size": "21470642176",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "name": "ceph_lv0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "tags": {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_name": "ceph",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.crush_device_class": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.encrypted": "0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_id": "0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.vdo": "0"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            },
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "vg_name": "ceph_vg0"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        }
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    ],
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    "1": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "devices": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "/dev/loop4"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            ],
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_name": "ceph_lv1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_size": "21470642176",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "name": "ceph_lv1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "tags": {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_name": "ceph",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.crush_device_class": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.encrypted": "0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_id": "1",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.vdo": "0"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            },
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "vg_name": "ceph_vg1"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        }
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    ],
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    "2": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "devices": [
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "/dev/loop5"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            ],
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_name": "ceph_lv2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_size": "21470642176",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "name": "ceph_lv2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "tags": {
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.cluster_name": "ceph",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.crush_device_class": "",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.encrypted": "0",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osd_id": "2",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:                "ceph.vdo": "0"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            },
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "type": "block",
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:            "vg_name": "ceph_vg2"
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:        }
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]:    ]
Nov 24 13:46:48 np0005533938 quizzical_cerf[274847]: }
Nov 24 13:46:48 np0005533938 systemd[1]: libpod-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope: Deactivated successfully.
Nov 24 13:46:48 np0005533938 podman[274830]: 2025-11-24 18:46:48.360786473 +0000 UTC m=+0.845324529 container died 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:46:48 np0005533938 systemd[1]: var-lib-containers-storage-overlay-93a5aecd8a923dfbe4a1e996eab4072052ba6a26c68ec39c21c2fae60a4ec8a3-merged.mount: Deactivated successfully.
Nov 24 13:46:48 np0005533938 podman[274830]: 2025-11-24 18:46:48.420925666 +0000 UTC m=+0.905463682 container remove 567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cerf, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:48 np0005533938 systemd[1]: libpod-conmon-567cf9b1fcc1669011a8cad25c73a46133197c5dd7847ee3c054d31a6eb16e79.scope: Deactivated successfully.
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.050642203 +0000 UTC m=+0.041488078 container create bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:46:49 np0005533938 systemd[1]: Started libpod-conmon-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope.
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.034570349 +0000 UTC m=+0.025416244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.15052831 +0000 UTC m=+0.141374285 container init bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.161416157 +0000 UTC m=+0.152262072 container start bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.165520747 +0000 UTC m=+0.156366662 container attach bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:46:49 np0005533938 cranky_northcutt[275023]: 167 167
Nov 24 13:46:49 np0005533938 systemd[1]: libpod-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope: Deactivated successfully.
Nov 24 13:46:49 np0005533938 conmon[275023]: conmon bd83a950f954b90e8dd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope/container/memory.events
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.169855843 +0000 UTC m=+0.160701758 container died bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:49 np0005533938 systemd[1]: var-lib-containers-storage-overlay-433157aaf25ede8b015efd89bcef81b4ae5cc4d1ad5d1f50f446de74755bce25-merged.mount: Deactivated successfully.
Nov 24 13:46:49 np0005533938 podman[275006]: 2025-11-24 18:46:49.223155589 +0000 UTC m=+0.214001504 container remove bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:46:49 np0005533938 systemd[1]: libpod-conmon-bd83a950f954b90e8dd8723408e998c6d56ae3a5e6a8c21b421030ec3cdae2fe.scope: Deactivated successfully.
Nov 24 13:46:49 np0005533938 podman[275050]: 2025-11-24 18:46:49.395404078 +0000 UTC m=+0.043228000 container create d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:46:49 np0005533938 systemd[1]: Started libpod-conmon-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope.
Nov 24 13:46:49 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:46:49 np0005533938 podman[275050]: 2025-11-24 18:46:49.379463257 +0000 UTC m=+0.027287189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:46:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:49 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:46:49 np0005533938 podman[275050]: 2025-11-24 18:46:49.495349446 +0000 UTC m=+0.143173398 container init d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:46:49 np0005533938 podman[275050]: 2025-11-24 18:46:49.503554867 +0000 UTC m=+0.151378819 container start d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:46:49 np0005533938 podman[275050]: 2025-11-24 18:46:49.507803761 +0000 UTC m=+0.155627703 container attach d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:46:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]: {
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_id": 0,
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "type": "bluestore"
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    },
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_id": 1,
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "type": "bluestore"
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    },
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_id": 2,
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:        "type": "bluestore"
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]:    }
Nov 24 13:46:50 np0005533938 zealous_faraday[275066]: }
Nov 24 13:46:50 np0005533938 systemd[1]: libpod-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Deactivated successfully.
Nov 24 13:46:50 np0005533938 systemd[1]: libpod-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Consumed 1.035s CPU time.
Nov 24 13:46:50 np0005533938 podman[275050]: 2025-11-24 18:46:50.532206847 +0000 UTC m=+1.180030809 container died d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 24 13:46:50 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fc1182e9da104e2a353b61fbe05cf4d9359ac601737ae2a5d418ee8821fc169a-merged.mount: Deactivated successfully.
Nov 24 13:46:50 np0005533938 podman[275050]: 2025-11-24 18:46:50.604154389 +0000 UTC m=+1.251978331 container remove d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:46:50 np0005533938 systemd[1]: libpod-conmon-d2c276ec93ddf45ac491bb801a5af11849fde3590791da5b1f37b8016ee8b600.scope: Deactivated successfully.
Nov 24 13:46:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:46:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:46:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5bc0f3db-cdd1-4e18-9695-3b48440967db does not exist
Nov 24 13:46:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3267ae93-6de4-4a5a-bae6-62003889b6bf does not exist
Nov 24 13:46:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:46:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:46:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:46:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:01 np0005533938 podman[275162]: 2025-11-24 18:47:01.048740595 +0000 UTC m=+0.128419387 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 24 13:47:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:02 np0005533938 podman[275188]: 2025-11-24 18:47:02.978168952 +0000 UTC m=+0.065823324 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 13:47:03 np0005533938 podman[275187]: 2025-11-24 18:47:03.003719678 +0000 UTC m=+0.087493255 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:47:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:47:08 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.564 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:47:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:47:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631865295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:08.999 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.143 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.145 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.232 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.233 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.256 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:47:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:09 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:47:09 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054809592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.653 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.658 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.681 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.683 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:47:09 np0005533938 nova_compute[270693]: 2025-11-24 18:47:09.684 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.684 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.685 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.685 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.703 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:10 np0005533938 nova_compute[270693]: 2025-11-24 18:47:10.704 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:47:11 np0005533938 nova_compute[270693]: 2025-11-24 18:47:11.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:11 np0005533938 nova_compute[270693]: 2025-11-24 18:47:11.557 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:11 np0005533938 nova_compute[270693]: 2025-11-24 18:47:11.557 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:11 np0005533938 nova_compute[270693]: 2025-11-24 18:47:11.558 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:11 np0005533938 nova_compute[270693]: 2025-11-24 18:47:11.558 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:12 np0005533938 nova_compute[270693]: 2025-11-24 18:47:12.553 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:47:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:47:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:47:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:47:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1443428077' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:47:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:47:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.741 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:47:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:47:22.742 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:47:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:32 np0005533938 podman[275270]: 2025-11-24 18:47:32.007865963 +0000 UTC m=+0.107070834 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:47:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:33 np0005533938 podman[275296]: 2025-11-24 18:47:33.997960064 +0000 UTC m=+0.084195693 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:47:34 np0005533938 podman[275297]: 2025-11-24 18:47:34.002884965 +0000 UTC m=+0.083267441 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:47:34
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:47:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:47:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:47:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:51 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2bbf5b0e-dbe5-4aaa-a5be-0f45f3768da4 does not exist
Nov 24 13:47:51 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 39dc976b-c550-4157-8d56-cae56d63f086 does not exist
Nov 24 13:47:51 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7ba17519-e429-452a-9ac2-2dc99223ca5d does not exist
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.175466323 +0000 UTC m=+0.055581872 container create 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:47:52 np0005533938 systemd[1]: Started libpod-conmon-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope.
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.145699724 +0000 UTC m=+0.025815333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.263859649 +0000 UTC m=+0.143975228 container init 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.2724628 +0000 UTC m=+0.152578339 container start 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.278460887 +0000 UTC m=+0.158576436 container attach 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:47:52 np0005533938 festive_nightingale[275627]: 167 167
Nov 24 13:47:52 np0005533938 systemd[1]: libpod-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope: Deactivated successfully.
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.280382464 +0000 UTC m=+0.160498033 container died 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:47:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5210a4946a693fd5d7537e4dfa2ec16e7b2bbfa30651cd4023f9421f8a08882c-merged.mount: Deactivated successfully.
Nov 24 13:47:52 np0005533938 podman[275610]: 2025-11-24 18:47:52.345703904 +0000 UTC m=+0.225819453 container remove 11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:47:52 np0005533938 systemd[1]: libpod-conmon-11d3a1bd020e83d749587432ca795023e6780fe2e426042bc556af7339ffc706.scope: Deactivated successfully.
Nov 24 13:47:52 np0005533938 podman[275653]: 2025-11-24 18:47:52.531399853 +0000 UTC m=+0.054735702 container create ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 24 13:47:52 np0005533938 systemd[1]: Started libpod-conmon-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope.
Nov 24 13:47:52 np0005533938 podman[275653]: 2025-11-24 18:47:52.505858767 +0000 UTC m=+0.029194696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:52 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:52 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:52 np0005533938 podman[275653]: 2025-11-24 18:47:52.626639086 +0000 UTC m=+0.149974925 container init ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:47:52 np0005533938 podman[275653]: 2025-11-24 18:47:52.634526099 +0000 UTC m=+0.157861938 container start ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 24 13:47:52 np0005533938 podman[275653]: 2025-11-24 18:47:52.64067899 +0000 UTC m=+0.164014849 container attach ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:47:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:53 np0005533938 sharp_lalande[275670]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:47:53 np0005533938 sharp_lalande[275670]: --> relative data size: 1.0
Nov 24 13:47:53 np0005533938 sharp_lalande[275670]: --> All data devices are unavailable
Nov 24 13:47:53 np0005533938 systemd[1]: libpod-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope: Deactivated successfully.
Nov 24 13:47:53 np0005533938 podman[275699]: 2025-11-24 18:47:53.695215884 +0000 UTC m=+0.022982744 container died ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:47:53 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0a3c5345c9b14e4d969500fa4a8904b3209718a183aacbafd18f5e5697990f48-merged.mount: Deactivated successfully.
Nov 24 13:47:53 np0005533938 podman[275699]: 2025-11-24 18:47:53.773521271 +0000 UTC m=+0.101288121 container remove ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:47:53 np0005533938 systemd[1]: libpod-conmon-ca4a7bfda837fe6c2bcad798594c9af36744b0107137498ce4ea5609a7fc22c7.scope: Deactivated successfully.
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.346781294 +0000 UTC m=+0.043296882 container create 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:47:54 np0005533938 systemd[1]: Started libpod-conmon-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope.
Nov 24 13:47:54 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.329024609 +0000 UTC m=+0.025540197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.433304494 +0000 UTC m=+0.129820122 container init 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.439055125 +0000 UTC m=+0.135570713 container start 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:47:54 np0005533938 charming_jackson[275870]: 167 167
Nov 24 13:47:54 np0005533938 systemd[1]: libpod-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope: Deactivated successfully.
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.446369134 +0000 UTC m=+0.142884752 container attach 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.447361909 +0000 UTC m=+0.143877517 container died 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:47:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-18a68cd7205a37b0a8795a12bd922e1db515f270cb150dbdc81df080c9a58ca8-merged.mount: Deactivated successfully.
Nov 24 13:47:54 np0005533938 podman[275854]: 2025-11-24 18:47:54.493697514 +0000 UTC m=+0.190213082 container remove 9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:47:54 np0005533938 systemd[1]: libpod-conmon-9b96c6637a749fed46e4b873e4d6caec3138a5aa55e77b68ca2bc5c22c68b3d7.scope: Deactivated successfully.
Nov 24 13:47:54 np0005533938 podman[275894]: 2025-11-24 18:47:54.655920097 +0000 UTC m=+0.045626019 container create afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:47:54 np0005533938 systemd[1]: Started libpod-conmon-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope.
Nov 24 13:47:54 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:54 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:54 np0005533938 podman[275894]: 2025-11-24 18:47:54.635353773 +0000 UTC m=+0.025059725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:54 np0005533938 podman[275894]: 2025-11-24 18:47:54.741491753 +0000 UTC m=+0.131197675 container init afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:47:54 np0005533938 podman[275894]: 2025-11-24 18:47:54.748787842 +0000 UTC m=+0.138493744 container start afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 24 13:47:54 np0005533938 podman[275894]: 2025-11-24 18:47:54.754517122 +0000 UTC m=+0.144223034 container attach afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:47:55 np0005533938 funny_panini[275910]: {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    "0": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "devices": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "/dev/loop3"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            ],
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_name": "ceph_lv0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_size": "21470642176",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "name": "ceph_lv0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "tags": {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_name": "ceph",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.crush_device_class": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.encrypted": "0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_id": "0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.vdo": "0"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            },
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "vg_name": "ceph_vg0"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        }
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    ],
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    "1": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "devices": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "/dev/loop4"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            ],
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_name": "ceph_lv1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_size": "21470642176",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "name": "ceph_lv1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "tags": {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_name": "ceph",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.crush_device_class": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.encrypted": "0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_id": "1",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.vdo": "0"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            },
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "vg_name": "ceph_vg1"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        }
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    ],
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    "2": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "devices": [
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "/dev/loop5"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            ],
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_name": "ceph_lv2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_size": "21470642176",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "name": "ceph_lv2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "tags": {
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.cluster_name": "ceph",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.crush_device_class": "",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.encrypted": "0",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osd_id": "2",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:                "ceph.vdo": "0"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            },
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "type": "block",
Nov 24 13:47:55 np0005533938 funny_panini[275910]:            "vg_name": "ceph_vg2"
Nov 24 13:47:55 np0005533938 funny_panini[275910]:        }
Nov 24 13:47:55 np0005533938 funny_panini[275910]:    ]
Nov 24 13:47:55 np0005533938 funny_panini[275910]: }
Nov 24 13:47:55 np0005533938 systemd[1]: libpod-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope: Deactivated successfully.
Nov 24 13:47:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:55 np0005533938 podman[275919]: 2025-11-24 18:47:55.610990234 +0000 UTC m=+0.028670794 container died afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:47:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f3d989ed263637aba0c419ebf96249474ecc3453c5c254f0f02076e18ab1685d-merged.mount: Deactivated successfully.
Nov 24 13:47:55 np0005533938 podman[275919]: 2025-11-24 18:47:55.686636547 +0000 UTC m=+0.104317017 container remove afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_panini, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:47:55 np0005533938 systemd[1]: libpod-conmon-afb9892bded1b569c755cf8319eb9f615007928d15183885d3c544875548da67.scope: Deactivated successfully.
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.436121247 +0000 UTC m=+0.039642112 container create 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:47:56 np0005533938 systemd[1]: Started libpod-conmon-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope.
Nov 24 13:47:56 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.415380489 +0000 UTC m=+0.018901384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.517342307 +0000 UTC m=+0.120863172 container init 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.525904137 +0000 UTC m=+0.129425002 container start 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:47:56 np0005533938 systemd[1]: libpod-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope: Deactivated successfully.
Nov 24 13:47:56 np0005533938 zen_torvalds[276091]: 167 167
Nov 24 13:47:56 np0005533938 conmon[276091]: conmon 561f0090314ebbb4573a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope/container/memory.events
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.532205911 +0000 UTC m=+0.135726796 container attach 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.534131928 +0000 UTC m=+0.137652793 container died 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:47:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c1a12956c49a1b3e2c720124cddeec24412385fcf3ea95a793d2b4a93f981bad-merged.mount: Deactivated successfully.
Nov 24 13:47:56 np0005533938 podman[276075]: 2025-11-24 18:47:56.581053288 +0000 UTC m=+0.184574163 container remove 561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:47:56 np0005533938 systemd[1]: libpod-conmon-561f0090314ebbb4573a959b3101dbb4dc24e7839fe23afc10ea7c1112aa4f06.scope: Deactivated successfully.
Nov 24 13:47:56 np0005533938 podman[276115]: 2025-11-24 18:47:56.778991046 +0000 UTC m=+0.050647151 container create 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:47:56 np0005533938 systemd[1]: Started libpod-conmon-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope.
Nov 24 13:47:56 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:47:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:56 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:47:56 np0005533938 podman[276115]: 2025-11-24 18:47:56.759153511 +0000 UTC m=+0.030809606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:47:56 np0005533938 podman[276115]: 2025-11-24 18:47:56.871309838 +0000 UTC m=+0.142965923 container init 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 24 13:47:56 np0005533938 podman[276115]: 2025-11-24 18:47:56.87753716 +0000 UTC m=+0.149193225 container start 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 24 13:47:56 np0005533938 podman[276115]: 2025-11-24 18:47:56.882852011 +0000 UTC m=+0.154508096 container attach 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:47:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:47:57 np0005533938 distracted_villani[276131]: {
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_id": 0,
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "type": "bluestore"
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    },
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_id": 1,
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "type": "bluestore"
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    },
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_id": 2,
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:        "type": "bluestore"
Nov 24 13:47:57 np0005533938 distracted_villani[276131]:    }
Nov 24 13:47:57 np0005533938 distracted_villani[276131]: }
Nov 24 13:47:57 np0005533938 systemd[1]: libpod-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope: Deactivated successfully.
Nov 24 13:47:57 np0005533938 podman[276115]: 2025-11-24 18:47:57.808343452 +0000 UTC m=+1.079999517 container died 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:47:57 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dc2145403a77db46ce83b4943ea24551903c29c0ee71e30671186a562d00524b-merged.mount: Deactivated successfully.
Nov 24 13:47:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:47:57 np0005533938 podman[276115]: 2025-11-24 18:47:57.871380076 +0000 UTC m=+1.143036161 container remove 4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:47:57 np0005533938 systemd[1]: libpod-conmon-4319552c569141070f2f3d67ac699f5b1853602014ded94b8dcd5c8cdecdfd59.scope: Deactivated successfully.
Nov 24 13:47:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:47:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:47:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7585dbc7-da0a-4abf-804e-4b119b42dfe0 does not exist
Nov 24 13:47:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 9368a8b3-de26-46e3-9ceb-6567aefb39f4 does not exist
Nov 24 13:47:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:58 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:47:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:03 np0005533938 podman[276229]: 2025-11-24 18:48:03.029362713 +0000 UTC m=+0.117005217 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:48:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:05 np0005533938 podman[276255]: 2025-11-24 18:48:05.0188276 +0000 UTC m=+0.100816041 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 24 13:48:05 np0005533938 podman[276256]: 2025-11-24 18:48:05.01925204 +0000 UTC m=+0.103591689 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 24 13:48:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.548 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 24 13:48:07 np0005533938 nova_compute[270693]: 2025-11-24 18:48:07.561 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.572 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.603 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:48:09 np0005533938 nova_compute[270693]: 2025-11-24 18:48:09.604 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:48:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:48:10 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2938684667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.051 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.225 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.227 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.466 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.467 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.547 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:48:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:48:10 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472366357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.976 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:48:10 np0005533938 nova_compute[270693]: 2025-11-24 18:48:10.981 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.001 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.002 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.002 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:48:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.960 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.961 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.961 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.977 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:11 np0005533938 nova_compute[270693]: 2025-11-24 18:48:11.978 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:48:12 np0005533938 nova_compute[270693]: 2025-11-24 18:48:12.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:12 np0005533938 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:12 np0005533938 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:12 np0005533938 nova_compute[270693]: 2025-11-24 18:48:12.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:13 np0005533938 nova_compute[270693]: 2025-11-24 18:48:13.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:48:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 24 13:48:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 24 13:48:14 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 24 13:48:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 24 13:48:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 24 13:48:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 24 13:48:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 13:48:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 24 13:48:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s
Nov 24 13:48:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 24 13:48:17 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 24 13:48:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/822022556' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:48:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 3.4 MiB/s wr, 9 op/s
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 24 13:48:19 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 24 13:48:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.9 MiB/s wr, 31 op/s
Nov 24 13:48:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.742 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:48:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:48:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:48:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 24 13:48:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 24 13:48:22 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 24 13:48:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 37 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 6.1 MiB/s wr, 62 op/s
Nov 24 13:48:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 24 13:48:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Nov 24 13:48:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Nov 24 13:48:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 456 KiB/s wr, 18 op/s
Nov 24 13:48:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 425 KiB/s wr, 17 op/s
Nov 24 13:48:34 np0005533938 podman[276338]: 2025-11-24 18:48:34.028975691 +0000 UTC m=+0.113656805 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:48:34
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups']
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:48:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:48:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 379 KiB/s wr, 0 op/s
Nov 24 13:48:35 np0005533938 podman[276366]: 2025-11-24 18:48:35.957076755 +0000 UTC m=+0.048045698 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:48:36 np0005533938 podman[276365]: 2025-11-24 18:48:36.003833311 +0000 UTC m=+0.088366726 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 24 13:48:37 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:37.594 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 13:48:37 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:37.595 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 13:48:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:48:43 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:48:43.596 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 13:48:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.093322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124093423, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1453, "num_deletes": 251, "total_data_size": 2265062, "memory_usage": 2311480, "flush_reason": "Manual Compaction"}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124103562, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2232086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19506, "largest_seqno": 20958, "table_properties": {"data_size": 2225267, "index_size": 3954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14135, "raw_average_key_size": 19, "raw_value_size": 2211508, "raw_average_value_size": 3114, "num_data_blocks": 180, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764009977, "oldest_key_time": 1764009977, "file_creation_time": 1764010124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10263 microseconds, and 4780 cpu microseconds.
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.103597) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2232086 bytes OK
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.103612) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105113) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105127) EVENT_LOG_v1 {"time_micros": 1764010124105123, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2258660, prev total WAL file size 2258660, number of live WAL files 2.
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2179KB)], [47(6878KB)]
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124105935, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9275881, "oldest_snapshot_seqno": -1}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4323 keys, 7503754 bytes, temperature: kUnknown
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124138321, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7503754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7473937, "index_size": 17931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106987, "raw_average_key_size": 24, "raw_value_size": 7394663, "raw_average_value_size": 1710, "num_data_blocks": 752, "num_entries": 4323, "num_filter_entries": 4323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.138540) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7503754 bytes
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.140013) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 285.7 rd, 231.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.4) OK, records in: 4841, records dropped: 518 output_compression: NoCompression
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.140056) EVENT_LOG_v1 {"time_micros": 1764010124140039, "job": 24, "event": "compaction_finished", "compaction_time_micros": 32467, "compaction_time_cpu_micros": 15064, "output_level": 6, "num_output_files": 1, "total_output_size": 7503754, "num_input_records": 4841, "num_output_records": 4323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124140597, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010124141854, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.105801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:44 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:48:44.141955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:48:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:48:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4636 writes, 20K keys, 4636 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4636 writes, 4636 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1317 writes, 5973 keys, 1317 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s#012Interval WAL: 1317 writes, 1317 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     87.5      0.28              0.07        12    0.023       0      0       0.0       0.0#012  L6      1/0    7.16 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    203.6    166.8      0.46              0.21        11    0.042     48K   5780       0.0       0.0#012 Sum      1/0    7.16 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    127.3    137.0      0.73              0.28        23    0.032     48K   5780       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    149.5    151.4      0.30              0.13        10    0.030     23K   2583       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    203.6    166.8      0.46              0.21        11    0.042     48K   5780       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.9      0.27              0.07        11    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.7 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 308.00 MB usage: 8.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000109 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(559,8.22 MB,2.66877%) FilterBlock(24,142.36 KB,0.0451373%) IndexBlock(24,261.91 KB,0.0830415%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:48:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:48:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 24 13:48:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 24 13:48:52 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 24 13:48:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 818 B/s wr, 3 op/s
Nov 24 13:48:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 24 13:48:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 24 13:48:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 24 13:48:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.0 KiB/s wr, 43 op/s
Nov 24 13:48:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 24 13:48:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 24 13:48:57 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 24 13:48:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 4.0 KiB/s wr, 57 op/s
Nov 24 13:48:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:48:58 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 67fa1872-d787-46e5-9f24-0508d6ab481a does not exist
Nov 24 13:48:58 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev fda44082-1974-42bb-b95b-448633b57f7b does not exist
Nov 24 13:48:58 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 7a6b1194-d726-4849-b24d-27274a80cbd0 does not exist
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:48:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:48:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:48:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:48:59 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.580498696 +0000 UTC m=+0.042431710 container create 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:48:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 4.5 KiB/s wr, 54 op/s
Nov 24 13:48:59 np0005533938 systemd[1]: Started libpod-conmon-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope.
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.563213713 +0000 UTC m=+0.025146707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:48:59 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.676268633 +0000 UTC m=+0.138201617 container init 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.683389887 +0000 UTC m=+0.145322871 container start 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.686399361 +0000 UTC m=+0.148332355 container attach 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 24 13:48:59 np0005533938 reverent_bassi[276692]: 167 167
Nov 24 13:48:59 np0005533938 systemd[1]: libpod-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope: Deactivated successfully.
Nov 24 13:48:59 np0005533938 conmon[276692]: conmon 96c5bf8411d32e72e56e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope/container/memory.events
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.693366212 +0000 UTC m=+0.155299226 container died 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:48:59 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8779eaffac1f44e69a0399fc7c11bcb3ec844e443d322f526a6810a7bb0ae1f4-merged.mount: Deactivated successfully.
Nov 24 13:48:59 np0005533938 podman[276676]: 2025-11-24 18:48:59.745288133 +0000 UTC m=+0.207221107 container remove 96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bassi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:48:59 np0005533938 systemd[1]: libpod-conmon-96c5bf8411d32e72e56e1ba1676dcb7800e9fe95885d41ba0afcef641e3d7c38.scope: Deactivated successfully.
Nov 24 13:49:00 np0005533938 podman[276716]: 2025-11-24 18:49:00.008344898 +0000 UTC m=+0.057296155 container create 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:49:00 np0005533938 systemd[1]: Started libpod-conmon-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope.
Nov 24 13:49:00 np0005533938 podman[276716]: 2025-11-24 18:48:59.973271748 +0000 UTC m=+0.022223035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:49:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:49:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:00 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:00 np0005533938 podman[276716]: 2025-11-24 18:49:00.099585413 +0000 UTC m=+0.148536700 container init 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:49:00 np0005533938 podman[276716]: 2025-11-24 18:49:00.110959161 +0000 UTC m=+0.159910448 container start 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 24 13:49:00 np0005533938 podman[276716]: 2025-11-24 18:49:00.115491332 +0000 UTC m=+0.164442689 container attach 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 24 13:49:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 24 13:49:01 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 24 13:49:01 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 24 13:49:01 np0005533938 hungry_moser[276733]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:49:01 np0005533938 hungry_moser[276733]: --> relative data size: 1.0
Nov 24 13:49:01 np0005533938 hungry_moser[276733]: --> All data devices are unavailable
Nov 24 13:49:01 np0005533938 systemd[1]: libpod-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Deactivated successfully.
Nov 24 13:49:01 np0005533938 systemd[1]: libpod-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Consumed 1.104s CPU time.
Nov 24 13:49:01 np0005533938 podman[276762]: 2025-11-24 18:49:01.319828156 +0000 UTC m=+0.033061301 container died 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:49:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2b96162fea127591f44a7e1eebf12ef7eb3f97e5518e77c1b7e4a8fb5593bc7f-merged.mount: Deactivated successfully.
Nov 24 13:49:01 np0005533938 podman[276762]: 2025-11-24 18:49:01.380085072 +0000 UTC m=+0.093318227 container remove 3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moser, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:49:01 np0005533938 systemd[1]: libpod-conmon-3f804521be453525acfd19eb8182916e8020068a4a1c099f8571d4babbb1b235.scope: Deactivated successfully.
Nov 24 13:49:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 5.4 KiB/s wr, 69 op/s
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.239757861 +0000 UTC m=+0.069211257 container create 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:49:02 np0005533938 systemd[1]: Started libpod-conmon-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope.
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.211395726 +0000 UTC m=+0.040849112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:49:02 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.342102608 +0000 UTC m=+0.171556014 container init 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.353295492 +0000 UTC m=+0.182748868 container start 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.356443629 +0000 UTC m=+0.185897015 container attach 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:49:02 np0005533938 beautiful_lamarr[276937]: 167 167
Nov 24 13:49:02 np0005533938 systemd[1]: libpod-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope: Deactivated successfully.
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.363923443 +0000 UTC m=+0.193376839 container died 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 24 13:49:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-413e631c339f8a43afa88cf0f4f2518b8d1792fa8c3487b84514b235516e88f0-merged.mount: Deactivated successfully.
Nov 24 13:49:02 np0005533938 podman[276921]: 2025-11-24 18:49:02.404167228 +0000 UTC m=+0.233620584 container remove 889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 24 13:49:02 np0005533938 systemd[1]: libpod-conmon-889077d08510933ca34576b5b0c5675688826824d974f979b87aa6cdda54dfc3.scope: Deactivated successfully.
Nov 24 13:49:02 np0005533938 podman[276962]: 2025-11-24 18:49:02.61526858 +0000 UTC m=+0.042676787 container create dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:49:02 np0005533938 systemd[1]: Started libpod-conmon-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope.
Nov 24 13:49:02 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:49:02 np0005533938 podman[276962]: 2025-11-24 18:49:02.597678769 +0000 UTC m=+0.025086986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:49:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:02 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:02 np0005533938 podman[276962]: 2025-11-24 18:49:02.708695439 +0000 UTC m=+0.136103696 container init dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 24 13:49:02 np0005533938 podman[276962]: 2025-11-24 18:49:02.722281341 +0000 UTC m=+0.149689548 container start dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:49:02 np0005533938 podman[276962]: 2025-11-24 18:49:02.725858469 +0000 UTC m=+0.153266676 container attach dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 24 13:49:02 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]: {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    "0": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "devices": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "/dev/loop3"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            ],
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_name": "ceph_lv0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_size": "21470642176",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "name": "ceph_lv0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "tags": {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_name": "ceph",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.crush_device_class": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.encrypted": "0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_id": "0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.vdo": "0"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            },
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "vg_name": "ceph_vg0"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        }
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    ],
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    "1": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "devices": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "/dev/loop4"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            ],
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_name": "ceph_lv1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_size": "21470642176",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "name": "ceph_lv1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "tags": {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_name": "ceph",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.crush_device_class": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.encrypted": "0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_id": "1",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.vdo": "0"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            },
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "vg_name": "ceph_vg1"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        }
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    ],
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    "2": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "devices": [
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "/dev/loop5"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            ],
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_name": "ceph_lv2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_size": "21470642176",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "name": "ceph_lv2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "tags": {
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.cluster_name": "ceph",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.crush_device_class": "",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.encrypted": "0",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osd_id": "2",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:                "ceph.vdo": "0"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            },
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "type": "block",
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:            "vg_name": "ceph_vg2"
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:        }
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]:    ]
Nov 24 13:49:03 np0005533938 fervent_galileo[276979]: }
Nov 24 13:49:03 np0005533938 systemd[1]: libpod-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope: Deactivated successfully.
Nov 24 13:49:03 np0005533938 podman[276988]: 2025-11-24 18:49:03.553291219 +0000 UTC m=+0.026827968 container died dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 24 13:49:03 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fdddecb749b6298db63963f3e71c9a1fe6646f0b8546bf9b81b1d983f90d3f28-merged.mount: Deactivated successfully.
Nov 24 13:49:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 89 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.0 MiB/s wr, 171 op/s
Nov 24 13:49:03 np0005533938 podman[276988]: 2025-11-24 18:49:03.613257958 +0000 UTC m=+0.086794637 container remove dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:49:03 np0005533938 systemd[1]: libpod-conmon-dedeb5966677fb9f2b8cc6ea03353cabb960259abac86b3a9e8a9cd0db567b9e.scope: Deactivated successfully.
Nov 24 13:49:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 24 13:49:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 24 13:49:04 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.291396851 +0000 UTC m=+0.040552674 container create a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:49:04 np0005533938 systemd[1]: Started libpod-conmon-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope.
Nov 24 13:49:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.365577738 +0000 UTC m=+0.114733631 container init a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.272665102 +0000 UTC m=+0.021820945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.372747324 +0000 UTC m=+0.121903137 container start a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.375887601 +0000 UTC m=+0.125043444 container attach a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 24 13:49:04 np0005533938 strange_spence[277161]: 167 167
Nov 24 13:49:04 np0005533938 systemd[1]: libpod-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope: Deactivated successfully.
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.379686274 +0000 UTC m=+0.128842117 container died a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 24 13:49:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-e367748fc87e54430004e6558c54402f0d2c3731ed1f3a731677d7abfebca40d-merged.mount: Deactivated successfully.
Nov 24 13:49:04 np0005533938 podman[277143]: 2025-11-24 18:49:04.436140827 +0000 UTC m=+0.185296650 container remove a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_spence, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 24 13:49:04 np0005533938 systemd[1]: libpod-conmon-a7ad43334d7e8713b5ac6ca9bad942530ac735cd97b9f68f72bba8cf20471501.scope: Deactivated successfully.
Nov 24 13:49:04 np0005533938 podman[277157]: 2025-11-24 18:49:04.506284285 +0000 UTC m=+0.175831418 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:49:04 np0005533938 podman[277212]: 2025-11-24 18:49:04.661211381 +0000 UTC m=+0.046646224 container create de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 24 13:49:04 np0005533938 systemd[1]: Started libpod-conmon-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope.
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:04 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:49:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:04 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:49:04 np0005533938 podman[277212]: 2025-11-24 18:49:04.646199493 +0000 UTC m=+0.031634356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:49:04 np0005533938 podman[277212]: 2025-11-24 18:49:04.744229204 +0000 UTC m=+0.129664057 container init de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:49:04 np0005533938 podman[277212]: 2025-11-24 18:49:04.752076206 +0000 UTC m=+0.137511049 container start de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:49:04 np0005533938 podman[277212]: 2025-11-24 18:49:04.754749582 +0000 UTC m=+0.140184425 container attach de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 24 13:49:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 216 KiB/s rd, 28 MiB/s wr, 310 op/s
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]: {
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_id": 0,
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "type": "bluestore"
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    },
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_id": 1,
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "type": "bluestore"
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    },
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_id": 2,
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:        "type": "bluestore"
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]:    }
Nov 24 13:49:05 np0005533938 dazzling_nightingale[277228]: }
Nov 24 13:49:05 np0005533938 systemd[1]: libpod-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Deactivated successfully.
Nov 24 13:49:05 np0005533938 systemd[1]: libpod-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Consumed 1.016s CPU time.
Nov 24 13:49:05 np0005533938 podman[277261]: 2025-11-24 18:49:05.800641963 +0000 UTC m=+0.021485648 container died de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:49:05 np0005533938 systemd[1]: var-lib-containers-storage-overlay-be3dbcd603e72aa56958d19ecafca17d3f25b045938aff7243e71d1a1e3534e0-merged.mount: Deactivated successfully.
Nov 24 13:49:05 np0005533938 podman[277261]: 2025-11-24 18:49:05.848351572 +0000 UTC m=+0.069195257 container remove de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:49:05 np0005533938 systemd[1]: libpod-conmon-de9c6ba0a480d34e9d6971b871d7fdeb7b7e600935e55b55bac18a4354de04ef.scope: Deactivated successfully.
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:49:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:49:05 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 37dd614d-b987-4132-a9e9-7436386b3ded does not exist
Nov 24 13:49:05 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 06b80360-7a1b-4b6e-95eb-422c433e9e84 does not exist
Nov 24 13:49:06 np0005533938 podman[277301]: 2025-11-24 18:49:06.051884298 +0000 UTC m=+0.058323260 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 13:49:06 np0005533938 podman[277346]: 2025-11-24 18:49:06.135687341 +0000 UTC m=+0.056087175 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:49:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 24 13:49:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 24 13:49:06 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 24 13:49:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:49:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:49:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 24 13:49:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 24 13:49:07 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 24 13:49:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 16 MiB/s wr, 107 op/s
Nov 24 13:49:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:49:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 24 13:49:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 24 13:49:08 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 24 13:49:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 15 KiB/s wr, 191 op/s
Nov 24 13:49:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 24 13:49:10 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 24 13:49:10 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:49:10 np0005533938 nova_compute[270693]: 2025-11-24 18:49:10.565 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:49:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972841125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.037 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.211 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.212 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.212 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.213 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.299 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.299 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.479 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing inventories for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.502 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating ProviderTree inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.502 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.527 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing aggregate associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.553 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing trait associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 13:49:11 np0005533938 nova_compute[270693]: 2025-11-24 18:49:11.575 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 195 KiB/s rd, 19 KiB/s wr, 272 op/s
Nov 24 13:49:11 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:49:11 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697120505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:49:12 np0005533938 nova_compute[270693]: 2025-11-24 18:49:12.007 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:12 np0005533938 nova_compute[270693]: 2025-11-24 18:49:12.012 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:49:12 np0005533938 nova_compute[270693]: 2025-11-24 18:49:12.026 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:49:12 np0005533938 nova_compute[270693]: 2025-11-24 18:49:12.027 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:49:12 np0005533938 nova_compute[270693]: 2025-11-24 18:49:12.028 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 24 13:49:12 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.023 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.024 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.044 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.058 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.059 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:49:13 np0005533938 nova_compute[270693]: 2025-11-24 18:49:13.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 24 KiB/s wr, 347 op/s
Nov 24 13:49:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 24 13:49:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 24 13:49:13 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 24 13:49:14 np0005533938 nova_compute[270693]: 2025-11-24 18:49:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:14 np0005533938 nova_compute[270693]: 2025-11-24 18:49:14.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:14 np0005533938 nova_compute[270693]: 2025-11-24 18:49:14.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:49:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 24 13:49:15 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 24 13:49:15 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 24 13:49:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 12 KiB/s wr, 209 op/s
Nov 24 13:49:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 24 13:49:16 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 24 13:49:16 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 24 13:49:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 314 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 6.0 KiB/s wr, 111 op/s
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 24 13:49:17 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 24 13:49:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:49:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:49:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:49:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2729009600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:49:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 11 KiB/s wr, 181 op/s
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.484 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.485 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.511 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 24 13:49:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 175 op/s
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.654 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.655 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.666 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.667 270697 INFO nova.compute.claims [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 24 13:49:21 np0005533938 nova_compute[270693]: 2025-11-24 18:49:21.790 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2586813287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.224 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.231 270697 DEBUG nova.compute.provider_tree [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.256 270697 DEBUG nova.scheduler.client.report [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.319 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.320 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.393 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.394 270697 DEBUG nova.network.neutron [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.434 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.456 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 24 13:49:22 np0005533938 nova_compute[270693]: 2025-11-24 18:49:22.498 270697 INFO nova.virt.block_device [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Booting with volume 57bd14c1-40c4-42ca-854f-95f89e621d53 at /dev/vda#033[00m
Nov 24 13:49:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.743 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:49:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 24 13:49:22 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.035 270697 DEBUG os_brick.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.036 270697 INFO oslo.privsep.daemon [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp0nuwri68/privsep.sock']#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.048 270697 DEBUG nova.network.neutron [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.048 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 13:49:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 10 KiB/s wr, 168 op/s
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.659 270697 INFO oslo.privsep.daemon [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.557 277439 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.561 277439 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.562 277439 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.563 277439 INFO oslo.privsep.daemon [-] privsep daemon running as pid 277439#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.662 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5c0fc3-59d2-426d-b66d-e0f9b833d8c0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.748 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.760 277439 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.760 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe559ea-e46a-4768-b017-f437c9ff23b7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.761 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.770 277439 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.770 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[9a60aba2-4c75-40e2-a71e-d22c29c499ff]: (4, ('InitiatorName=iqn.1994-05.com.redhat:cf95ee7bc55e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.772 277439 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.782 277439 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.782 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[b96185d1-919d-4bc9-b1ac-845dd50c2484]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.784 277439 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1cd8df-838c-48de-8e4b-4ca1095071e5]: (4, 'ce8f254e-4b98-4140-abc7-8040b35476ad') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.784 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.802 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.805 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.806 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.806 270697 DEBUG os_brick.initiator.connectors.lightos [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.807 270697 DEBUG os_brick.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] <== get_connector_properties: return (771ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:cf95ee7bc55e', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'ce8f254e-4b98-4140-abc7-8040b35476ad', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 24 13:49:23 np0005533938 nova_compute[270693]: 2025-11-24 18:49:23.808 270697 DEBUG nova.virt.block_device [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating existing volume attachment record: cab52bb9-c5f2-4664-afde-1d2efa7d2dd3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 24 13:49:24 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 13:49:24 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4007807367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.187 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.188 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.189 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating image(s)#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.189 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Ensure instance console log exists: /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.190 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.192 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-57bd14c1-40c4-42ca-854f-95f89e621d53', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '57bd14c1-40c4-42ca-854f-95f89e621d53', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '81f5edb9-2756-4a6e-bc3a-fa770161d562', 'attached_at': '', 'detached_at': '', 'volume_id': '57bd14c1-40c4-42ca-854f-95f89e621d53', 'serial': '57bd14c1-40c4-42ca-854f-95f89e621d53'}, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'device_type': 'disk', 'attachment_id': 'cab52bb9-c5f2-4664-afde-1d2efa7d2dd3', 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.196 270697 WARNING nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.200 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.201 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.host [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.204 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T18:48:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fa20e92f-7c52-40ac-838f-32e378b8ec04',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.205 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.206 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.207 270697 DEBUG nova.virt.hardware [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.228 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.232 270697 DEBUG nova.privsep.utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.233 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 13:49:25 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317962750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 13:49:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.627 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.629 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.629 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.631 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:25 np0005533938 systemd[1]: Starting libvirt secret daemon...
Nov 24 13:49:25 np0005533938 systemd[1]: Started libvirt secret daemon.
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.695 270697 DEBUG nova.objects.instance [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81f5edb9-2756-4a6e-bc3a-fa770161d562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.711 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] End _get_guest_xml xml=<domain type="kvm">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <uuid>81f5edb9-2756-4a6e-bc3a-fa770161d562</uuid>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <name>instance-00000001</name>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <memory>131072</memory>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <vcpu>1</vcpu>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <metadata>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:name>instance-depend-image</nova:name>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:creationTime>2025-11-24 18:49:25</nova:creationTime>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:flavor name="m1.nano">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:memory>128</nova:memory>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:disk>1</nova:disk>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:swap>0</nova:swap>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:vcpus>1</nova:vcpus>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </nova:flavor>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:owner>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:user uuid="c5033dc71ef0458982cc0f8121662150">tempest-ImageDependencyTests-981399736-project-member</nova:user>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <nova:project uuid="0d692fe6fe5e446c86fe7152afbbaa17">tempest-ImageDependencyTests-981399736</nova:project>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </nova:owner>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <nova:ports/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </nova:instance>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </metadata>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <sysinfo type="smbios">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <system>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="manufacturer">RDO</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="product">OpenStack Compute</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="serial">81f5edb9-2756-4a6e-bc3a-fa770161d562</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="uuid">81f5edb9-2756-4a6e-bc3a-fa770161d562</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <entry name="family">Virtual Machine</entry>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </system>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </sysinfo>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <os>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <boot dev="hd"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <smbios mode="sysinfo"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <acpi/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <apic/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <vmcoreinfo/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <clock offset="utc">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <timer name="hpet" present="no"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </clock>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <cpu mode="host-model" match="exact">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <disk type="network" device="cdrom">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <driver type="raw" cache="none"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <source protocol="rbd" name="vms/81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <host name="192.168.122.100" port="6789"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </source>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <auth username="openstack">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </auth>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <target dev="sda" bus="sata"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <disk type="network" device="disk">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <source protocol="rbd" name="volumes/volume-57bd14c1-40c4-42ca-854f-95f89e621d53">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <host name="192.168.122.100" port="6789"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </source>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <auth username="openstack">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:        <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      </auth>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <target dev="vda" bus="virtio"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <serial>57bd14c1-40c4-42ca-854f-95f89e621d53</serial>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <serial type="pty">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <log file="/var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/console.log" append="off"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </serial>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <video>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <model type="virtio"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <input type="tablet" bus="usb"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <rng model="virtio">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <backend model="random">/dev/urandom</backend>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <controller type="usb" index="0"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    <memballoon model="virtio">
Nov 24 13:49:25 np0005533938 nova_compute[270693]:      <stats period="10"/>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:    </memballoon>
Nov 24 13:49:25 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:49:25 np0005533938 nova_compute[270693]: </domain>
Nov 24 13:49:25 np0005533938 nova_compute[270693]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.765 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.765 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.766 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Using config drive#033[00m
Nov 24 13:49:25 np0005533938 nova_compute[270693]: 2025-11-24 18:49:25.791 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:26 np0005533938 nova_compute[270693]: 2025-11-24 18:49:26.290 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Creating config drive at /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config#033[00m
Nov 24 13:49:26 np0005533938 nova_compute[270693]: 2025-11-24 18:49:26.299 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoeicw7cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:26 np0005533938 nova_compute[270693]: 2025-11-24 18:49:26.443 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoeicw7cu" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:26 np0005533938 nova_compute[270693]: 2025-11-24 18:49:26.467 270697 DEBUG nova.storage.rbd_utils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:26 np0005533938 nova_compute[270693]: 2025-11-24 18:49:26.470 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 24 13:49:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 24 13:49:27 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 24 13:49:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 34 op/s
Nov 24 13:49:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 24 13:49:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 24 13:49:28 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 24 13:49:28 np0005533938 nova_compute[270693]: 2025-11-24 18:49:28.566 270697 DEBUG oslo_concurrency.processutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config 81f5edb9-2756-4a6e-bc3a-fa770161d562_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:28 np0005533938 nova_compute[270693]: 2025-11-24 18:49:28.567 270697 INFO nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting local config drive /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562/disk.config because it was imported into RBD.#033[00m
Nov 24 13:49:28 np0005533938 systemd-machined[232503]: New machine qemu-1-instance-00000001.
Nov 24 13:49:28 np0005533938 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.266 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010169.26571, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.268 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Resumed (Lifecycle Event)#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.277 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.278 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.281 270697 INFO nova.virt.libvirt.driver [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance spawned successfully.#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.282 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.330 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.340 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.345 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.346 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.347 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.348 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.349 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.350 270697 DEBUG nova.virt.libvirt.driver [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.363 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.364 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010169.276756, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.365 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Started (Lifecycle Event)#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.430 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.433 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.466 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.478 270697 INFO nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 4.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.479 270697 DEBUG nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.550 270697 INFO nova.compute.manager [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 7.94 seconds to build instance.#033[00m
Nov 24 13:49:29 np0005533938 nova_compute[270693]: 2025-11-24 18:49:29.569 270697 DEBUG oslo_concurrency.lockutils [None req-5af70344-95b9-48d5-b19b-4614068cfa11 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 24 13:49:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 19 KiB/s wr, 11 op/s
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 24 13:49:32 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 24 13:49:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 27 KiB/s wr, 77 op/s
Nov 24 13:49:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 24 13:49:34 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 24 13:49:34 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:49:34
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:49:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:49:35 np0005533938 podman[277622]: 2025-11-24 18:49:35.005062082 +0000 UTC m=+0.086354046 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:49:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 24 13:49:35 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 24 13:49:35 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 24 13:49:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 4.5 KiB/s wr, 130 op/s
Nov 24 13:49:36 np0005533938 podman[277648]: 2025-11-24 18:49:36.964545384 +0000 UTC m=+0.053941442 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:49:36 np0005533938 podman[277649]: 2025-11-24 18:49:36.973003052 +0000 UTC m=+0.057795987 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 13:49:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 24 13:49:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 24 13:49:37 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 24 13:49:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.8 KiB/s wr, 110 op/s
Nov 24 13:49:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 61 op/s
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.534 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.535 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.554 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 24 13:49:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.0 KiB/s wr, 93 op/s
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.642 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.642 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.649 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.649 270697 INFO nova.compute.claims [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 24 13:49:41 np0005533938 nova_compute[270693]: 2025-11-24 18:49:41.793 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498490164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.244 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.252 270697 DEBUG nova.compute.provider_tree [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.277 270697 DEBUG nova.scheduler.client.report [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.302 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.303 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.353 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.354 270697 DEBUG nova.network.neutron [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.375 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.392 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.480 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.481 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.481 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating image(s)#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.507 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.536 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.555 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.557 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "729e6718c1087801824b83fd3da972f8762743ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.558 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "729e6718c1087801824b83fd3da972f8762743ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.788 270697 DEBUG nova.virt.libvirt.imagebackend [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image locations are: [{'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.832 270697 DEBUG nova.virt.libvirt.imagebackend [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Selected location: {'url': 'rbd://e5ee928f-099b-569b-93c9-ecf025cbb50d/images/e08f0b9d-adb5-48f3-899f-503d3912e516/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.833 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] cloning images/e08f0b9d-adb5-48f3-899f-503d3912e516@snap to None/3226af13-afcf-47ff-91b3-2ccec9def10d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.875 270697 DEBUG nova.network.neutron [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.875 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 24 13:49:42 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 24 13:49:42 np0005533938 nova_compute[270693]: 2025-11-24 18:49:42.973 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "729e6718c1087801824b83fd3da972f8762743ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.139 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] resizing rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.210 270697 DEBUG nova.objects.instance [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'migration_context' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.229 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.229 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Ensure instance console log exists: /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.230 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.230 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.231 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.232 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b313313bc424bc1da2fd32d986e790f4',container_format='bare',created_at=2025-11-24T18:49:36Z,direct_url=<?>,disk_format='raw',id=e08f0b9d-adb5-48f3-899f-503d3912e516,min_disk=0,min_ram=0,name='tempest-image-dependency-test-290862993',owner='0d692fe6fe5e446c86fe7152afbbaa17',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-24T18:49:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'boot_index': 0, 'encrypted': False, 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'e08f0b9d-adb5-48f3-899f-503d3912e516'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.236 270697 WARNING nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.241 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.242 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.host [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.245 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.246 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-24T18:48:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fa20e92f-7c52-40ac-838f-32e378b8ec04',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b313313bc424bc1da2fd32d986e790f4',container_format='bare',created_at=2025-11-24T18:49:36Z,direct_url=<?>,disk_format='raw',id=e08f0b9d-adb5-48f3-899f-503d3912e516,min_disk=0,min_ram=0,name='tempest-image-dependency-test-290862993',owner='0d692fe6fe5e446c86fe7152afbbaa17',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-24T18:49:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.246 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.247 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.248 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.249 270697 DEBUG nova.virt.hardware [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.252 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:49:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Nov 24 13:49:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 13:49:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024916754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.657 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.678 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 13:49:43 np0005533938 nova_compute[270693]: 2025-11-24 18:49:43.682 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:49:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 24 13:49:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490985699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.110 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.112 270697 DEBUG nova.objects.instance [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.134 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] End _get_guest_xml xml=<domain type="kvm">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <uuid>3226af13-afcf-47ff-91b3-2ccec9def10d</uuid>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <name>instance-00000002</name>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <memory>131072</memory>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <vcpu>1</vcpu>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <metadata>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:name>instance-depend-image</nova:name>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:creationTime>2025-11-24 18:49:43</nova:creationTime>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:flavor name="m1.nano">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:memory>128</nova:memory>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:disk>1</nova:disk>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:swap>0</nova:swap>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:ephemeral>0</nova:ephemeral>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:vcpus>1</nova:vcpus>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </nova:flavor>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:owner>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:user uuid="c5033dc71ef0458982cc0f8121662150">tempest-ImageDependencyTests-981399736-project-member</nova:user>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <nova:project uuid="0d692fe6fe5e446c86fe7152afbbaa17">tempest-ImageDependencyTests-981399736</nova:project>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </nova:owner>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:root type="image" uuid="e08f0b9d-adb5-48f3-899f-503d3912e516"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <nova:ports/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </nova:instance>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </metadata>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <sysinfo type="smbios">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <system>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="manufacturer">RDO</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="product">OpenStack Compute</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="serial">3226af13-afcf-47ff-91b3-2ccec9def10d</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="uuid">3226af13-afcf-47ff-91b3-2ccec9def10d</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <entry name="family">Virtual Machine</entry>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </system>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </sysinfo>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <os>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <boot dev="hd"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <smbios mode="sysinfo"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </os>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <features>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <acpi/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <apic/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <vmcoreinfo/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </features>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <clock offset="utc">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <timer name="pit" tickpolicy="delay"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <timer name="hpet" present="no"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </clock>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <cpu mode="host-model" match="exact">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <topology sockets="1" cores="1" threads="1"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </cpu>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  <devices>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <disk type="network" device="disk">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <driver type="raw" cache="none"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <source protocol="rbd" name="vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <host name="192.168.122.100" port="6789"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </source>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <auth username="openstack">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </auth>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <target dev="vda" bus="virtio"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <disk type="network" device="cdrom">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <driver type="raw" cache="none"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <source protocol="rbd" name="vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <host name="192.168.122.100" port="6789"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </source>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <auth username="openstack">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:        <secret type="ceph" uuid="e5ee928f-099b-569b-93c9-ecf025cbb50d"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      </auth>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <target dev="sda" bus="sata"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </disk>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <serial type="pty">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <log file="/var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/console.log" append="off"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </serial>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <video>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <model type="virtio"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </video>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <input type="tablet" bus="usb"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <rng model="virtio">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <backend model="random">/dev/urandom</backend>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </rng>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="pci" model="pcie-root-port"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <controller type="usb" index="0"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    <memballoon model="virtio">
Nov 24 13:49:44 np0005533938 nova_compute[270693]:      <stats period="10"/>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:    </memballoon>
Nov 24 13:49:44 np0005533938 nova_compute[270693]:  </devices>
Nov 24 13:49:44 np0005533938 nova_compute[270693]: </domain>
Nov 24 13:49:44 np0005533938 nova_compute[270693]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.191 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.192 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.192 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Using config drive
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.215 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.379 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Creating config drive at /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.386 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxm1dxv00 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.510 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxm1dxv00" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.535 270697 DEBUG nova.storage.rbd_utils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] rbd image 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.538 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.682 270697 DEBUG oslo_concurrency.processutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config 3226af13-afcf-47ff-91b3-2ccec9def10d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.683 270697 INFO nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deleting local config drive /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d/disk.config because it was imported into RBD.
Nov 24 13:49:44 np0005533938 systemd-machined[232503]: New machine qemu-2-instance-00000002.
Nov 24 13:49:44 np0005533938 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.996 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010184.9956279, 3226af13-afcf-47ff-91b3-2ccec9def10d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.996 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Resumed (Lifecycle Event)
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.998 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 24 13:49:44 np0005533938 nova_compute[270693]: 2025-11-24 18:49:44.998 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.001 270697 INFO nova.virt.libvirt.driver [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance spawned successfully.
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.001 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.019 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.022 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.023 270697 DEBUG nova.virt.libvirt.driver [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.026 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 DEBUG nova.virt.driver [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] Emitting event <LifecycleEvent: 1764010184.9972782, 3226af13-afcf-47ff-91b3-2ccec9def10d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.088 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Started (Lifecycle Event)
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.123 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.124 270697 INFO nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 2.64 seconds to spawn the instance on the hypervisor.
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.125 270697 DEBUG nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.135 270697 DEBUG nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.175 270697 INFO nova.compute.manager [None req-c355d412-63ff-4d3d-897f-c7681c74da67 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.210 270697 INFO nova.compute.manager [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 3.61 seconds to build instance.
Nov 24 13:49:45 np0005533938 nova_compute[270693]: 2025-11-24 18:49:45.237 270697 DEBUG oslo_concurrency.lockutils [None req-b7984f00-c6dc-4dbd-b73e-99cff9fd7e8c c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:49:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.4 KiB/s wr, 119 op/s
Nov 24 13:49:46 np0005533938 nova_compute[270693]: 2025-11-24 18:49:46.652 270697 DEBUG nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 24 13:49:46 np0005533938 nova_compute[270693]: 2025-11-24 18:49:46.704 270697 INFO nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] instance snapshotting
Nov 24 13:49:46 np0005533938 nova_compute[270693]: 2025-11-24 18:49:46.962 270697 INFO nova.virt.libvirt.driver [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Beginning live snapshot process
Nov 24 13:49:47 np0005533938 nova_compute[270693]: 2025-11-24 18:49:47.111 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] creating snapshot(8d9fc30f3e21495986539262bbd5d8d3) on rbd image(3226af13-afcf-47ff-91b3-2ccec9def10d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 24 13:49:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 KiB/s wr, 95 op/s
Nov 24 13:49:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 24 13:49:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 24 13:49:47 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 24 13:49:47 np0005533938 nova_compute[270693]: 2025-11-24 18:49:47.996 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] cloning vms/3226af13-afcf-47ff-91b3-2ccec9def10d_disk@8d9fc30f3e21495986539262bbd5d8d3 to images/d89c4af4-ef58-418d-a436-2c65c07ddebe clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 24 13:49:48 np0005533938 nova_compute[270693]: 2025-11-24 18:49:48.114 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] flattening images/d89c4af4-ef58-418d-a436-2c65c07ddebe flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 24 13:49:48 np0005533938 nova_compute[270693]: 2025-11-24 18:49:48.274 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] removing snapshot(8d9fc30f3e21495986539262bbd5d8d3) on rbd image(3226af13-afcf-47ff-91b3-2ccec9def10d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 24 13:49:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 24 13:49:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 24 13:49:48 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 24 13:49:49 np0005533938 nova_compute[270693]: 2025-11-24 18:49:49.026 270697 DEBUG nova.storage.rbd_utils [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] creating snapshot(snap) on rbd image(d89c4af4-ef58-418d-a436-2c65c07ddebe) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 24 13:49:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 25 KiB/s wr, 85 op/s
Nov 24 13:49:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 24 13:49:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 24 13:49:49 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 24 13:49:51 np0005533938 nova_compute[270693]: 2025-11-24 18:49:51.405 270697 INFO nova.virt.libvirt.driver [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Snapshot image upload complete
Nov 24 13:49:51 np0005533938 nova_compute[270693]: 2025-11-24 18:49:51.406 270697 INFO nova.compute.manager [None req-9a8507b7-ff88-4ba4-b22b-04c2d47afef2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 4.70 seconds to snapshot the instance on the hypervisor.
Nov 24 13:49:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 29 KiB/s wr, 128 op/s
Nov 24 13:49:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 29 KiB/s wr, 135 op/s
Nov 24 13:49:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 24 13:49:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 24 13:49:54 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.908 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.908 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.909 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.910 270697 INFO nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Terminating instance
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquired lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 13:49:54 np0005533938 nova_compute[270693]: 2025-11-24 18:49:54.911 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.350 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 13:49:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 7.0 KiB/s wr, 188 op/s
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.650 270697 DEBUG nova.network.neutron [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.668 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Releasing lock "refresh_cache-3226af13-afcf-47ff-91b3-2ccec9def10d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.669 270697 DEBUG nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 13:49:55 np0005533938 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 24 13:49:55 np0005533938 systemd-machined[232503]: Machine qemu-2-instance-00000002 terminated.
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.888 270697 INFO nova.virt.libvirt.driver [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance destroyed successfully.
Nov 24 13:49:55 np0005533938 nova_compute[270693]: 2025-11-24 18:49:55.888 270697 DEBUG nova.objects.instance [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'resources' on Instance uuid 3226af13-afcf-47ff-91b3-2ccec9def10d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.302 270697 INFO nova.virt.libvirt.driver [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deleting instance files /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d_del
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.302 270697 INFO nova.virt.libvirt.driver [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deletion of /var/lib/nova/instances/3226af13-afcf-47ff-91b3-2ccec9def10d_del complete
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.426 270697 DEBUG nova.virt.libvirt.host [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.427 270697 INFO nova.virt.libvirt.host [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] UEFI support detected
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.429 270697 INFO nova.compute.manager [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 1.76 seconds to destroy the instance on the hypervisor.
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG oslo.service.loopingcall [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.430 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.556 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.580 270697 DEBUG nova.network.neutron [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.600 270697 INFO nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Took 0.17 seconds to deallocate network for instance.
Nov 24 13:49:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 42 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.3 KiB/s wr, 77 op/s
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.664 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.665 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:49:57 np0005533938 nova_compute[270693]: 2025-11-24 18:49:57.754 270697 DEBUG oslo_concurrency.processutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 24 13:49:57 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 24 13:49:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:49:58 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665544519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.208 270697 DEBUG oslo_concurrency.processutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.213 270697 DEBUG nova.compute.provider_tree [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.228 270697 DEBUG nova.scheduler.client.report [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.252 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.277 270697 INFO nova.scheduler.client.report [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Deleted allocations for instance 3226af13-afcf-47ff-91b3-2ccec9def10d
Nov 24 13:49:58 np0005533938 nova_compute[270693]: 2025-11-24 18:49:58.347 270697 DEBUG oslo_concurrency.lockutils [None req-8d9bcd08-6d5f-4ae6-9b99-a71e0b2b53d2 c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "3226af13-afcf-47ff-91b3-2ccec9def10d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:49:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 4.8 KiB/s wr, 132 op/s
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.767 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.767 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.768 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.769 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Terminating instance
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.770 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.770 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquired lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 24 13:49:59 np0005533938 nova_compute[270693]: 2025-11-24 18:49:59.771 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 24 13:50:00 np0005533938 nova_compute[270693]: 2025-11-24 18:50:00.654 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 13:50:00 np0005533938 nova_compute[270693]: 2025-11-24 18:50:00.904 270697 DEBUG nova.network.neutron [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 13:50:00 np0005533938 nova_compute[270693]: 2025-11-24 18:50:00.921 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Releasing lock "refresh_cache-81f5edb9-2756-4a6e-bc3a-fa770161d562" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 24 13:50:00 np0005533938 nova_compute[270693]: 2025-11-24 18:50:00.921 270697 DEBUG nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 24 13:50:00 np0005533938 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 24 13:50:00 np0005533938 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.155s CPU time.
Nov 24 13:50:00 np0005533938 systemd-machined[232503]: Machine qemu-1-instance-00000001 terminated.
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.145 270697 INFO nova.virt.libvirt.driver [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance destroyed successfully.
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.145 270697 DEBUG nova.objects.instance [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lazy-loading 'resources' on Instance uuid 81f5edb9-2756-4a6e-bc3a-fa770161d562 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.341 270697 INFO nova.virt.libvirt.driver [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting instance files /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562_del
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.342 270697 INFO nova.virt.libvirt.driver [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deletion of /var/lib/nova/instances/81f5edb9-2756-4a6e-bc3a-fa770161d562_del complete
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.513 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.514 270697 DEBUG oslo.service.loopingcall [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.514 270697 DEBUG nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.515 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 24 13:50:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 5.0 KiB/s wr, 126 op/s
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.745 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.755 270697 DEBUG nova.network.neutron [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 24 13:50:01 np0005533938 nova_compute[270693]: 2025-11-24 18:50:01.766 270697 INFO nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.25 seconds to deallocate network for instance.
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.063 270697 INFO nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Took 0.30 seconds to detach 1 volumes for instance.
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.065 270697 DEBUG nova.compute.manager [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Deleting volume: 57bd14c1-40c4-42ca-854f-95f89e621d53 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.366 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.367 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.409 270697 DEBUG oslo_concurrency.processutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715933528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.847 270697 DEBUG oslo_concurrency.processutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.854 270697 DEBUG nova.compute.provider_tree [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.883 270697 DEBUG nova.scheduler.client.report [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 24 13:50:02 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.919 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:50:02 np0005533938 nova_compute[270693]: 2025-11-24 18:50:02.962 270697 INFO nova.scheduler.client.report [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Deleted allocations for instance 81f5edb9-2756-4a6e-bc3a-fa770161d562
Nov 24 13:50:03 np0005533938 nova_compute[270693]: 2025-11-24 18:50:03.053 270697 DEBUG oslo_concurrency.lockutils [None req-c55dfe34-f29b-444c-83e7-a47f3a2d64cf c5033dc71ef0458982cc0f8121662150 0d692fe6fe5e446c86fe7152afbbaa17 - - default default] Lock "81f5edb9-2756-4a6e-bc3a-fa770161d562" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 24 13:50:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:50:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:50:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:50:03 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2977381307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:50:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.8 KiB/s wr, 95 op/s
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:05 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:05.500 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 24 13:50:05 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:05.500 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 24 13:50:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Nov 24 13:50:06 np0005533938 podman[278309]: 2025-11-24 18:50:06.009107507 +0000 UTC m=+0.103150258 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:50:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:50:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:06 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:50:06 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:07 np0005533938 podman[278612]: 2025-11-24 18:50:07.620796809 +0000 UTC m=+0.092722313 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.6 KiB/s wr, 85 op/s
Nov 24 13:50:07 np0005533938 podman[278613]: 2025-11-24 18:50:07.657307583 +0000 UTC m=+0.127215997 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev f9ded4e4-0d95-4074-a0b2-e33220d9f186 does not exist
Nov 24 13:50:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3a440a23-1cb6-4080-bba9-84f7731fb822 does not exist
Nov 24 13:50:08 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev c446aba9-f59d-4157-95d9-64378c2f576f does not exist
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:50:08 np0005533938 podman[278881]: 2025-11-24 18:50:08.709715425 +0000 UTC m=+0.042555724 container create 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:50:08 np0005533938 systemd[1]: Started libpod-conmon-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope.
Nov 24 13:50:08 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:08 np0005533938 podman[278881]: 2025-11-24 18:50:08.690656648 +0000 UTC m=+0.023496977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:08 np0005533938 podman[278881]: 2025-11-24 18:50:08.792151354 +0000 UTC m=+0.124991683 container init 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:50:08 np0005533938 podman[278881]: 2025-11-24 18:50:08.799178246 +0000 UTC m=+0.132018545 container start 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:08 np0005533938 podman[278881]: 2025-11-24 18:50:08.803206645 +0000 UTC m=+0.136046974 container attach 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:50:08 np0005533938 loving_robinson[278895]: 167 167
Nov 24 13:50:08 np0005533938 systemd[1]: libpod-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope: Deactivated successfully.
Nov 24 13:50:08 np0005533938 conmon[278895]: conmon 05a8cf76dc3218ed22a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope/container/memory.events
Nov 24 13:50:08 np0005533938 podman[278900]: 2025-11-24 18:50:08.853326943 +0000 UTC m=+0.030865387 container died 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 24 13:50:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-11600026154f137cf41b7177070d7db354e48b8f45530556201467b96bb59a7c-merged.mount: Deactivated successfully.
Nov 24 13:50:08 np0005533938 podman[278900]: 2025-11-24 18:50:08.908046413 +0000 UTC m=+0.085584847 container remove 05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:08 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:50:08 np0005533938 systemd[1]: libpod-conmon-05a8cf76dc3218ed22a87e0041012e43a220fb8dc764eee6c09a03651301fdd6.scope: Deactivated successfully.
Nov 24 13:50:09 np0005533938 podman[278922]: 2025-11-24 18:50:09.164155107 +0000 UTC m=+0.061485957 container create 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:50:09 np0005533938 systemd[1]: Started libpod-conmon-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope.
Nov 24 13:50:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:09 np0005533938 podman[278922]: 2025-11-24 18:50:09.144047155 +0000 UTC m=+0.041378015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:09 np0005533938 podman[278922]: 2025-11-24 18:50:09.257678758 +0000 UTC m=+0.155009588 container init 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:09 np0005533938 podman[278922]: 2025-11-24 18:50:09.266213628 +0000 UTC m=+0.163544428 container start 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:50:09 np0005533938 podman[278922]: 2025-11-24 18:50:09.270040151 +0000 UTC m=+0.167370991 container attach 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:50:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 24 13:50:10 np0005533938 hopeful_neumann[278938]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:50:10 np0005533938 hopeful_neumann[278938]: --> relative data size: 1.0
Nov 24 13:50:10 np0005533938 hopeful_neumann[278938]: --> All data devices are unavailable
Nov 24 13:50:10 np0005533938 systemd[1]: libpod-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Deactivated successfully.
Nov 24 13:50:10 np0005533938 podman[278922]: 2025-11-24 18:50:10.331994756 +0000 UTC m=+1.229325566 container died 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:50:10 np0005533938 systemd[1]: libpod-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Consumed 1.012s CPU time.
Nov 24 13:50:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-2941b40273fb6a6c22c67cbf161a06b792aaddef7384e514e04cb115931bfdca-merged.mount: Deactivated successfully.
Nov 24 13:50:10 np0005533938 podman[278922]: 2025-11-24 18:50:10.380303389 +0000 UTC m=+1.277634199 container remove 2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:50:10 np0005533938 systemd[1]: libpod-conmon-2fcad48c8672019af4b97a680a6d4444605581c38ac6cd82bee9bc433d377045.scope: Deactivated successfully.
Nov 24 13:50:10 np0005533938 nova_compute[270693]: 2025-11-24 18:50:10.886 270697 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764010195.8853695, 3226af13-afcf-47ff-91b3-2ccec9def10d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 13:50:10 np0005533938 nova_compute[270693]: 2025-11-24 18:50:10.888 270697 INFO nova.compute.manager [-] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] VM Stopped (Lifecycle Event)#033[00m
Nov 24 13:50:10 np0005533938 nova_compute[270693]: 2025-11-24 18:50:10.917 270697 DEBUG nova.compute.manager [None req-81330878-ede8-46cb-a9d0-6ae08f3ac908 - - - - - -] [instance: 3226af13-afcf-47ff-91b3-2ccec9def10d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 13:50:10 np0005533938 podman[279118]: 2025-11-24 18:50:10.968388406 +0000 UTC m=+0.054260731 container create 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:50:11 np0005533938 systemd[1]: Started libpod-conmon-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope.
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:10.943139217 +0000 UTC m=+0.029011602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:11.063431694 +0000 UTC m=+0.149304059 container init 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:11.074933046 +0000 UTC m=+0.160805371 container start 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:50:11 np0005533938 gracious_chaplygin[279135]: 167 167
Nov 24 13:50:11 np0005533938 systemd[1]: libpod-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope: Deactivated successfully.
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:11.08204936 +0000 UTC m=+0.167921745 container attach 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:11.083361242 +0000 UTC m=+0.169233597 container died 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 24 13:50:11 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a1bbd07b927dc7c5b69878d508cccb5f5ce5b59b4278b947f24b66b975fde0b1-merged.mount: Deactivated successfully.
Nov 24 13:50:11 np0005533938 podman[279118]: 2025-11-24 18:50:11.132468185 +0000 UTC m=+0.218340500 container remove 27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:50:11 np0005533938 systemd[1]: libpod-conmon-27cac7ac9827a0e0ad12ba6676aa769d4ff83f2f780c38c7b70d146967562e30.scope: Deactivated successfully.
Nov 24 13:50:11 np0005533938 podman[279159]: 2025-11-24 18:50:11.323578007 +0000 UTC m=+0.050840196 container create 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:50:11 np0005533938 systemd[1]: Started libpod-conmon-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope.
Nov 24 13:50:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:11 np0005533938 podman[279159]: 2025-11-24 18:50:11.29756449 +0000 UTC m=+0.024826749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:11 np0005533938 podman[279159]: 2025-11-24 18:50:11.418438941 +0000 UTC m=+0.145701260 container init 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 13:50:11 np0005533938 podman[279159]: 2025-11-24 18:50:11.430468156 +0000 UTC m=+0.157730375 container start 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 24 13:50:11 np0005533938 podman[279159]: 2025-11-24 18:50:11.436410971 +0000 UTC m=+0.163673190 container attach 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:50:11 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:11.502 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 13:50:11 np0005533938 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:11 np0005533938 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:50:11 np0005533938 nova_compute[270693]: 2025-11-24 18:50:11.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:50:11 np0005533938 nova_compute[270693]: 2025-11-24 18:50:11.553 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:50:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 48 op/s
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]: {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    "0": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "devices": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "/dev/loop3"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            ],
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_name": "ceph_lv0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_size": "21470642176",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "name": "ceph_lv0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "tags": {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_name": "ceph",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.crush_device_class": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.encrypted": "0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_id": "0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.vdo": "0"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            },
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "vg_name": "ceph_vg0"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        }
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    ],
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    "1": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "devices": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "/dev/loop4"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            ],
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_name": "ceph_lv1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_size": "21470642176",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "name": "ceph_lv1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "tags": {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_name": "ceph",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.crush_device_class": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.encrypted": "0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_id": "1",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.vdo": "0"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            },
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "vg_name": "ceph_vg1"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        }
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    ],
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    "2": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "devices": [
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "/dev/loop5"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            ],
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_name": "ceph_lv2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_size": "21470642176",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "name": "ceph_lv2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "tags": {
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.cluster_name": "ceph",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.crush_device_class": "",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.encrypted": "0",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osd_id": "2",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:                "ceph.vdo": "0"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            },
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "type": "block",
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:            "vg_name": "ceph_vg2"
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:        }
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]:    ]
Nov 24 13:50:12 np0005533938 reverent_mirzakhani[279175]: }
Nov 24 13:50:12 np0005533938 systemd[1]: libpod-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope: Deactivated successfully.
Nov 24 13:50:12 np0005533938 podman[279159]: 2025-11-24 18:50:12.163416541 +0000 UTC m=+0.890678730 container died 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-50e5f81d7ea47eb4cf85ebe4604f5e2657b2d52b4854963f4c9ccc8392c3a89b-merged.mount: Deactivated successfully.
Nov 24 13:50:12 np0005533938 podman[279159]: 2025-11-24 18:50:12.213951709 +0000 UTC m=+0.941213888 container remove 817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 24 13:50:12 np0005533938 systemd[1]: libpod-conmon-817fd049a6ceecfe37cab921184d247277a5b9c1e86f941f3572bbcfb6bf4921.scope: Deactivated successfully.
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.570 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.571 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.599 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.600 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:50:12 np0005533938 nova_compute[270693]: 2025-11-24 18:50:12.601 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.880096438 +0000 UTC m=+0.058314410 container create 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:50:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:12 np0005533938 systemd[1]: Started libpod-conmon-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope.
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.844519086 +0000 UTC m=+0.022737088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:12 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.977556475 +0000 UTC m=+0.155774517 container init 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.985836118 +0000 UTC m=+0.164054050 container start 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:50:12 np0005533938 serene_chaplygin[279372]: 167 167
Nov 24 13:50:12 np0005533938 systemd[1]: libpod-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope: Deactivated successfully.
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.990373769 +0000 UTC m=+0.168591811 container attach 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:50:12 np0005533938 podman[279356]: 2025-11-24 18:50:12.994639464 +0000 UTC m=+0.172857386 container died 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:50:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:50:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772554347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:50:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-262e5034cf47a1cfbdf26c13650d6788696e4d78d5d190ab8659f58394862cc2-merged.mount: Deactivated successfully.
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.024 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:50:13 np0005533938 podman[279356]: 2025-11-24 18:50:13.033290381 +0000 UTC m=+0.211508343 container remove 0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:50:13 np0005533938 systemd[1]: libpod-conmon-0b324e4c86f1dc41b1a626d0a2af9c859b41a4b395377a04c250a236e6a6381a.scope: Deactivated successfully.
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.188 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.190 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.190 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.191 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:50:13 np0005533938 podman[279399]: 2025-11-24 18:50:13.212587263 +0000 UTC m=+0.043765103 container create d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:50:13 np0005533938 systemd[1]: Started libpod-conmon-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope.
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.254 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:50:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:50:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:13 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:50:13 np0005533938 podman[279399]: 2025-11-24 18:50:13.274602212 +0000 UTC m=+0.105780062 container init d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.274 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:50:13 np0005533938 podman[279399]: 2025-11-24 18:50:13.286813771 +0000 UTC m=+0.117991621 container start d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 13:50:13 np0005533938 podman[279399]: 2025-11-24 18:50:13.195697409 +0000 UTC m=+0.026875279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:50:13 np0005533938 podman[279399]: 2025-11-24 18:50:13.290042581 +0000 UTC m=+0.121220431 container attach d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 24 13:50:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Nov 24 13:50:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:50:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820872512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.661 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.669 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.694 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.723 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:50:13 np0005533938 nova_compute[270693]: 2025-11-24 18:50:13.723 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:50:14 np0005533938 interesting_germain[279415]: {
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_id": 0,
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "type": "bluestore"
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    },
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_id": 1,
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "type": "bluestore"
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    },
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_id": 2,
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:        "type": "bluestore"
Nov 24 13:50:14 np0005533938 interesting_germain[279415]:    }
Nov 24 13:50:14 np0005533938 interesting_germain[279415]: }
Nov 24 13:50:14 np0005533938 systemd[1]: libpod-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope: Deactivated successfully.
Nov 24 13:50:14 np0005533938 podman[279399]: 2025-11-24 18:50:14.269225067 +0000 UTC m=+1.100402907 container died d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:50:14 np0005533938 systemd[1]: var-lib-containers-storage-overlay-13ed4b036e63f4104d34d27ab7867a9216708831b570220b0c98c49ea749d57c-merged.mount: Deactivated successfully.
Nov 24 13:50:14 np0005533938 podman[279399]: 2025-11-24 18:50:14.31628255 +0000 UTC m=+1.147460390 container remove d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 24 13:50:14 np0005533938 systemd[1]: libpod-conmon-d141513bee128be76f8cfd47b9c7a5e812fd52f1cc26d9c2c5b815ef973431bb.scope: Deactivated successfully.
Nov 24 13:50:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:50:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:14 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:50:14 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:14 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev c9ba5153-d2c9-47f9-aa83-cce357d7f708 does not exist
Nov 24 13:50:14 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 526395e9-cc88-48bd-ac3d-c2ca670201e0 does not exist
Nov 24 13:50:14 np0005533938 nova_compute[270693]: 2025-11-24 18:50:14.681 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:14 np0005533938 nova_compute[270693]: 2025-11-24 18:50:14.682 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:15 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:50:15 np0005533938 nova_compute[270693]: 2025-11-24 18:50:15.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:16 np0005533938 nova_compute[270693]: 2025-11-24 18:50:16.144 270697 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764010201.142717, 81f5edb9-2756-4a6e-bc3a-fa770161d562 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 24 13:50:16 np0005533938 nova_compute[270693]: 2025-11-24 18:50:16.145 270697 INFO nova.compute.manager [-] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] VM Stopped (Lifecycle Event)#033[00m
Nov 24 13:50:16 np0005533938 nova_compute[270693]: 2025-11-24 18:50:16.170 270697 DEBUG nova.compute.manager [None req-032a844a-59f2-47a3-a8ff-742d66f3edfc - - - - - -] [instance: 81f5edb9-2756-4a6e-bc3a-fa770161d562] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 24 13:50:16 np0005533938 nova_compute[270693]: 2025-11-24 18:50:16.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:16 np0005533938 nova_compute[270693]: 2025-11-24 18:50:16.531 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:50:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:50:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:50:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:50:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451871078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:50:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.744 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:50:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:50:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:50:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:50:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:50:33 np0005533938 ceph-osd[88544]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6867 writes, 1384 syncs, 4.96 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1285 writes, 3543 keys, 1285 commit groups, 1.0 writes per commit group, ingest: 1.95 MB, 0.00 MB/s#012Interval WAL: 1285 writes, 527 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:50:34
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.log']
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:50:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:50:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:37 np0005533938 podman[279530]: 2025-11-24 18:50:37.007618867 +0000 UTC m=+0.080083153 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:50:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:37 np0005533938 podman[279555]: 2025-11-24 18:50:37.959624169 +0000 UTC m=+0.051528694 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 24 13:50:37 np0005533938 podman[279556]: 2025-11-24 18:50:37.972656918 +0000 UTC m=+0.062772849 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 24 13:50:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:50:40 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.0 total, 600.0 interval#012Cumulative writes: 8591 writes, 32K keys, 8591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8591 writes, 2012 syncs, 4.27 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1906 writes, 4972 keys, 1906 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s#012Interval WAL: 1906 writes, 803 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:50:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:50:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:50:47 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s#012Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:50:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:50 np0005533938 ceph-mgr[75218]: [devicehealth INFO root] Check health
Nov 24 13:50:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:55 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:57 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:50:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:50:59 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:01 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:02 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:03 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:05 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:07 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:07 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:07 np0005533938 podman[279596]: 2025-11-24 18:51:07.976278292 +0000 UTC m=+0.076632844 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 24 13:51:08 np0005533938 podman[279622]: 2025-11-24 18:51:08.049883411 +0000 UTC m=+0.047199191 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 24 13:51:08 np0005533938 podman[279623]: 2025-11-24 18:51:08.121379328 +0000 UTC m=+0.101491185 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 24 13:51:09 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:11 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.542 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.565 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.566 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.566 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:51:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:51:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704332064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:51:12 np0005533938 nova_compute[270693]: 2025-11-24 18:51:12.975 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.129 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.130 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.130 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.131 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.209 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.209 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.232 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:51:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:51:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446840728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.663 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:51:13 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.669 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.687 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.688 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:51:13 np0005533938 nova_compute[270693]: 2025-11-24 18:51:13.688 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:51:14 np0005533938 nova_compute[270693]: 2025-11-24 18:51:14.675 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:14 np0005533938 nova_compute[270693]: 2025-11-24 18:51:14.675 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:14 np0005533938 nova_compute[270693]: 2025-11-24 18:51:14.696 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:14 np0005533938 nova_compute[270693]: 2025-11-24 18:51:14.697 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:14 np0005533938 nova_compute[270693]: 2025-11-24 18:51:14.697 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:51:15 np0005533938 nova_compute[270693]: 2025-11-24 18:51:15.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:15 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.005973855 +0000 UTC m=+0.071860357 container create 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 24 13:51:16 np0005533938 systemd[1]: Started libpod-conmon-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope.
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:15.975534547 +0000 UTC m=+0.041421109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.099987735 +0000 UTC m=+0.165874237 container init 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.107719765 +0000 UTC m=+0.173606237 container start 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.111231951 +0000 UTC m=+0.177118423 container attach 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:51:16 np0005533938 awesome_buck[279995]: 167 167
Nov 24 13:51:16 np0005533938 systemd[1]: libpod-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope: Deactivated successfully.
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.117596837 +0000 UTC m=+0.183483339 container died 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:51:16 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ef0c84802b238213d6085ab9eb80ec4376933e23cadd268316e5212a66b66fd0-merged.mount: Deactivated successfully.
Nov 24 13:51:16 np0005533938 podman[279979]: 2025-11-24 18:51:16.174229899 +0000 UTC m=+0.240116401 container remove 93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:16 np0005533938 systemd[1]: libpod-conmon-93d7c9a76e7a5c1e0406da46098d4fd2612476dd0332a1654cdc93981b2c85b8.scope: Deactivated successfully.
Nov 24 13:51:16 np0005533938 podman[280018]: 2025-11-24 18:51:16.413188211 +0000 UTC m=+0.072758419 container create 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:51:16 np0005533938 systemd[1]: Started libpod-conmon-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope.
Nov 24 13:51:16 np0005533938 podman[280018]: 2025-11-24 18:51:16.384879625 +0000 UTC m=+0.044449913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:16 np0005533938 nova_compute[270693]: 2025-11-24 18:51:16.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:16 np0005533938 nova_compute[270693]: 2025-11-24 18:51:16.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:16 np0005533938 podman[280018]: 2025-11-24 18:51:16.5356577 +0000 UTC m=+0.195227888 container init 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:51:16 np0005533938 podman[280018]: 2025-11-24 18:51:16.548469395 +0000 UTC m=+0.208039613 container start 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:51:16 np0005533938 podman[280018]: 2025-11-24 18:51:16.553996221 +0000 UTC m=+0.213566429 container attach 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:51:17 np0005533938 nova_compute[270693]: 2025-11-24 18:51:17.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:51:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]: [
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:    {
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "available": false,
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "ceph_device": false,
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "lsm_data": {},
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "lvs": [],
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "path": "/dev/sr0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "rejected_reasons": [
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "Has a FileSystem",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "Insufficient space (<5GB)"
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        ],
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        "sys_api": {
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "actuators": null,
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "device_nodes": "sr0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "devname": "sr0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "human_readable_size": "482.00 KB",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "id_bus": "ata",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "model": "QEMU DVD-ROM",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "nr_requests": "2",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "parent": "/dev/sr0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "partitions": {},
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "path": "/dev/sr0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "removable": "1",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "rev": "2.5+",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "ro": "0",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "rotational": "1",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "sas_address": "",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "sas_device_handle": "",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "scheduler_mode": "mq-deadline",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "sectors": 0,
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "sectorsize": "2048",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "size": 493568.0,
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "support_discard": "2048",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "type": "disk",
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:            "vendor": "QEMU"
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:        }
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]:    }
Nov 24 13:51:18 np0005533938 pedantic_roentgen[280034]: ]
Nov 24 13:51:18 np0005533938 systemd[1]: libpod-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Deactivated successfully.
Nov 24 13:51:18 np0005533938 podman[280018]: 2025-11-24 18:51:18.130812455 +0000 UTC m=+1.790382623 container died 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:51:18 np0005533938 systemd[1]: libpod-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Consumed 1.635s CPU time.
Nov 24 13:51:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-60e756399f52dc5572c3c388b63749fc1133525b7297ec993a57d035de05f4c4-merged.mount: Deactivated successfully.
Nov 24 13:51:18 np0005533938 podman[280018]: 2025-11-24 18:51:18.180880856 +0000 UTC m=+1.840451024 container remove 17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:51:18 np0005533938 systemd[1]: libpod-conmon-17e266acf06dfd8914a2e8bd6098dfb5bccda79b26246815a3ac1f51fd73dbb0.scope: Deactivated successfully.
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:18 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 31c098d9-8c9d-4d64-a5d1-e75c3c6d16f0 does not exist
Nov 24 13:51:18 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 1058b7ed-47b5-45c2-a1c3-845433273a2d does not exist
Nov 24 13:51:18 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev fa19832c-bf97-4640-8cd2-c3e53d849d34 does not exist
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:51:18 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.797643151 +0000 UTC m=+0.033189647 container create 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:51:18 np0005533938 systemd[1]: Started libpod-conmon-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope.
Nov 24 13:51:18 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.783683128 +0000 UTC m=+0.019229644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.880705312 +0000 UTC m=+0.116251818 container init 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.890482472 +0000 UTC m=+0.126028958 container start 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.893512526 +0000 UTC m=+0.129059032 container attach 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 24 13:51:18 np0005533938 goofy_lederberg[282230]: 167 167
Nov 24 13:51:18 np0005533938 systemd[1]: libpod-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope: Deactivated successfully.
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.897986346 +0000 UTC m=+0.133532832 container died 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:51:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-a204410acf5033ac86def89c76b65a619a384ad96b735e8710cef799b54f6693-merged.mount: Deactivated successfully.
Nov 24 13:51:18 np0005533938 podman[282214]: 2025-11-24 18:51:18.935203401 +0000 UTC m=+0.170749897 container remove 0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 24 13:51:18 np0005533938 systemd[1]: libpod-conmon-0431124be64285e982ea8595a79e6a473ec61dbaede7bc28925adbd3fe8a4853.scope: Deactivated successfully.
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/55251396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:51:19 np0005533938 podman[282253]: 2025-11-24 18:51:19.081856244 +0000 UTC m=+0.036430586 container create 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:19 np0005533938 systemd[1]: Started libpod-conmon-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope.
Nov 24 13:51:19 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:19 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:19 np0005533938 podman[282253]: 2025-11-24 18:51:19.066407425 +0000 UTC m=+0.020981777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:19 np0005533938 podman[282253]: 2025-11-24 18:51:19.169824156 +0000 UTC m=+0.124398548 container init 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:51:19 np0005533938 podman[282253]: 2025-11-24 18:51:19.175868505 +0000 UTC m=+0.130442877 container start 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:51:19 np0005533938 podman[282253]: 2025-11-24 18:51:19.180542749 +0000 UTC m=+0.135117121 container attach 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:19 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:51:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:20 np0005533938 adoring_dewdney[282270]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:51:20 np0005533938 adoring_dewdney[282270]: --> relative data size: 1.0
Nov 24 13:51:20 np0005533938 adoring_dewdney[282270]: --> All data devices are unavailable
Nov 24 13:51:20 np0005533938 systemd[1]: libpod-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Deactivated successfully.
Nov 24 13:51:20 np0005533938 systemd[1]: libpod-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Consumed 1.122s CPU time.
Nov 24 13:51:20 np0005533938 podman[282253]: 2025-11-24 18:51:20.357490919 +0000 UTC m=+1.312065271 container died 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:20 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ee045df71c7cf1bae164319a3f5fc18bed9c36eb2af2d03be2a140509af04bcc-merged.mount: Deactivated successfully.
Nov 24 13:51:20 np0005533938 podman[282253]: 2025-11-24 18:51:20.403039209 +0000 UTC m=+1.357613541 container remove 58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:51:20 np0005533938 systemd[1]: libpod-conmon-58d69940405dfc52e65d32f3b2555e03f80b22b54e3977b5f027994061de1cab.scope: Deactivated successfully.
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.171569743 +0000 UTC m=+0.079819712 container create 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:51:21 np0005533938 systemd[1]: Started libpod-conmon-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope.
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.142797336 +0000 UTC m=+0.051047365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.258413957 +0000 UTC m=+0.166663956 container init 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.265739857 +0000 UTC m=+0.173989806 container start 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.26951941 +0000 UTC m=+0.177769379 container attach 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:51:21 np0005533938 exciting_elbakyan[282467]: 167 167
Nov 24 13:51:21 np0005533938 systemd[1]: libpod-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope: Deactivated successfully.
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.271866587 +0000 UTC m=+0.180116526 container died 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:21 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8316b3db69730bc998ad46820752cbe0c93a6164474503742ef1d673a5fbfede-merged.mount: Deactivated successfully.
Nov 24 13:51:21 np0005533938 podman[282451]: 2025-11-24 18:51:21.311408019 +0000 UTC m=+0.219657988 container remove 5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 24 13:51:21 np0005533938 systemd[1]: libpod-conmon-5dfaa8ed454a6a33d1d38bdd8858db4285d2085b6e7dd25e3764a36473d70373.scope: Deactivated successfully.
Nov 24 13:51:21 np0005533938 podman[282492]: 2025-11-24 18:51:21.524460143 +0000 UTC m=+0.041851669 container create cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:51:21 np0005533938 systemd[1]: Started libpod-conmon-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope.
Nov 24 13:51:21 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:21 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:21 np0005533938 podman[282492]: 2025-11-24 18:51:21.504719268 +0000 UTC m=+0.022110764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:21 np0005533938 podman[282492]: 2025-11-24 18:51:21.604673024 +0000 UTC m=+0.122064540 container init cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:51:21 np0005533938 podman[282492]: 2025-11-24 18:51:21.611571444 +0000 UTC m=+0.128962930 container start cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:51:21 np0005533938 podman[282492]: 2025-11-24 18:51:21.615717056 +0000 UTC m=+0.133108552 container attach cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:51:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:22 np0005533938 charming_davinci[282507]: {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    "0": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "devices": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "/dev/loop3"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            ],
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_name": "ceph_lv0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_size": "21470642176",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "name": "ceph_lv0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "tags": {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_name": "ceph",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.crush_device_class": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.encrypted": "0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_id": "0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.vdo": "0"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            },
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "vg_name": "ceph_vg0"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        }
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    ],
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    "1": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "devices": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "/dev/loop4"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            ],
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_name": "ceph_lv1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_size": "21470642176",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "name": "ceph_lv1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "tags": {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_name": "ceph",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.crush_device_class": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.encrypted": "0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_id": "1",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.vdo": "0"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            },
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "vg_name": "ceph_vg1"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        }
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    ],
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    "2": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "devices": [
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "/dev/loop5"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            ],
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_name": "ceph_lv2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_size": "21470642176",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "name": "ceph_lv2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "tags": {
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.cluster_name": "ceph",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.crush_device_class": "",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.encrypted": "0",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osd_id": "2",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:                "ceph.vdo": "0"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            },
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "type": "block",
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:            "vg_name": "ceph_vg2"
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:        }
Nov 24 13:51:22 np0005533938 charming_davinci[282507]:    ]
Nov 24 13:51:22 np0005533938 charming_davinci[282507]: }
Nov 24 13:51:22 np0005533938 systemd[1]: libpod-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope: Deactivated successfully.
Nov 24 13:51:22 np0005533938 podman[282492]: 2025-11-24 18:51:22.415148859 +0000 UTC m=+0.932540365 container died cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:51:22 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0a52bb2514e2be603a824d7cb00d8eff7d35d954ff9cc4dddef572ee3d0df3e1-merged.mount: Deactivated successfully.
Nov 24 13:51:22 np0005533938 podman[282492]: 2025-11-24 18:51:22.47824547 +0000 UTC m=+0.995636956 container remove cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:51:22 np0005533938 systemd[1]: libpod-conmon-cedf4871345cb2c13cd9b27ec618b471943642448ecb920721b909336ce8dd16.scope: Deactivated successfully.
Nov 24 13:51:22 np0005533938 systemd-logind[822]: New session 54 of user zuul.
Nov 24 13:51:22 np0005533938 systemd[1]: Started Session 54 of User zuul.
Nov 24 13:51:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.745 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:51:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:51:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:51:22.747 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:51:22 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.178622889 +0000 UTC m=+0.043341256 container create bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.153739348 +0000 UTC m=+0.018457735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:23 np0005533938 systemd[1]: Started libpod-conmon-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope.
Nov 24 13:51:23 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.877488442 +0000 UTC m=+0.742206829 container init bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.883539521 +0000 UTC m=+0.748257888 container start bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.886890363 +0000 UTC m=+0.751608750 container attach bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:51:23 np0005533938 upbeat_bohr[282725]: 167 167
Nov 24 13:51:23 np0005533938 systemd[1]: libpod-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope: Deactivated successfully.
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.889567429 +0000 UTC m=+0.754285796 container died bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:51:23 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5a27363e0b0d474f3eca9390dc98774ecbedc351c5c9ad2ec596668c4372b694-merged.mount: Deactivated successfully.
Nov 24 13:51:23 np0005533938 podman[282708]: 2025-11-24 18:51:23.925879041 +0000 UTC m=+0.790597408 container remove bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:51:23 np0005533938 systemd[1]: libpod-conmon-bd5265ac06f8f751ea36301be0fbfa457c94d473b027d5aac629dbece2389e00.scope: Deactivated successfully.
Nov 24 13:51:24 np0005533938 podman[282776]: 2025-11-24 18:51:24.065055091 +0000 UTC m=+0.036351664 container create 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:51:24 np0005533938 systemd[1]: Started libpod-conmon-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope.
Nov 24 13:51:24 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:51:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:24 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:51:24 np0005533938 podman[282776]: 2025-11-24 18:51:24.050237327 +0000 UTC m=+0.021533930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:51:24 np0005533938 podman[282776]: 2025-11-24 18:51:24.14846394 +0000 UTC m=+0.119760533 container init 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:51:24 np0005533938 podman[282776]: 2025-11-24 18:51:24.160079966 +0000 UTC m=+0.131376559 container start 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:51:24 np0005533938 podman[282776]: 2025-11-24 18:51:24.164038543 +0000 UTC m=+0.135335126 container attach 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]: {
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_id": 0,
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "type": "bluestore"
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    },
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_id": 1,
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "type": "bluestore"
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    },
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_id": 2,
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:        "type": "bluestore"
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]:    }
Nov 24 13:51:25 np0005533938 trusting_satoshi[282804]: }
Nov 24 13:51:25 np0005533938 systemd[1]: libpod-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope: Deactivated successfully.
Nov 24 13:51:25 np0005533938 podman[282776]: 2025-11-24 18:51:25.150266896 +0000 UTC m=+1.121563479 container died 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:51:25 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c5a2517146b6d3cb2433e1f7176cbae3082be80b970e50c5b2b454a76cb1eedd-merged.mount: Deactivated successfully.
Nov 24 13:51:25 np0005533938 podman[282776]: 2025-11-24 18:51:25.204867787 +0000 UTC m=+1.176164390 container remove 5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:51:25 np0005533938 systemd[1]: libpod-conmon-5979caf9742fabcf401641f5947071648f37d2436d1f7d13111484f2f9356ded.scope: Deactivated successfully.
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:25 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 2b8361f4-e302-40fe-b949-d2ce88bab0cc does not exist
Nov 24 13:51:25 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5fa2154a-deb7-45cf-81cd-15b6bdcffeed does not exist
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:25 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:51:25 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:26 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14745 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:26 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 13:51:26 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702689370' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 13:51:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:27 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:32 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:51:34
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms']
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:51:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:51:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:37 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:39 np0005533938 podman[283147]: 2025-11-24 18:51:39.01144895 +0000 UTC m=+0.086262001 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Nov 24 13:51:39 np0005533938 podman[283145]: 2025-11-24 18:51:39.018723439 +0000 UTC m=+0.103359031 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 24 13:51:39 np0005533938 podman[283146]: 2025-11-24 18:51:39.018884353 +0000 UTC m=+0.104225872 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 24 13:51:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:42 np0005533938 ovs-vsctl[283239]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 13:51:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:43 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 13:51:43 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 13:51:43 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:51:43 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: cache status {prefix=cache status} (starting...)
Nov 24 13:51:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:43 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: client ls {prefix=client ls} (starting...)
Nov 24 13:51:43 np0005533938 lvm[283599]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:51:43 np0005533938 lvm[283599]: VG ceph_vg2 finished
Nov 24 13:51:44 np0005533938 lvm[283606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:51:44 np0005533938 lvm[283606]: VG ceph_vg1 finished
Nov 24 13:51:44 np0005533938 lvm[283609]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:51:44 np0005533938 lvm[283609]: VG ceph_vg0 finished
Nov 24 13:51:44 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14749 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:44 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 13:51:44 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14751 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:44 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 13:51:44 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 13:51:44 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 13:51:44 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 13:51:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 24 13:51:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4069699190' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 13:51:45 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 13:51:45 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14757 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:45 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:51:45 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:45.220+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:51:45 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916895032' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:51:45 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556796055' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 13:51:45 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: ops {prefix=ops} (starting...)
Nov 24 13:51:45 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 24 13:51:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145218467' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849952787' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836729523' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: session ls {prefix=session ls} (starting...)
Nov 24 13:51:46 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: status {prefix=status} (starting...)
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898588925' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14771 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 13:51:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764268314' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 13:51:46 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14775 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393344985' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203941721' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593666717' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 13:51:47 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770084270' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 13:51:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:48 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14787 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:48 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:48.080+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 13:51:48 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023611235' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 13:51:48 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14791 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/935284366' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 13:51:48 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14793 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 24 13:51:48 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678684416' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14797 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 13:51:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580327653' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14801 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:49 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 13:51:49 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646305784' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002156 3 0.000070
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 1736704 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.090582848s of 10.453024864s, submitted: 149
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004428 2 0.000110
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006589 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004346 2 0.000082
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006672 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 1728512 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005481 4 0.000244
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009344 4 0.000268
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1835008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1835008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6672c/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621572 data_alloc: 218103808 data_used: 126976
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 1818624 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6672c/0xe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 1802240 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x682a9/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 1802240 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 1794048 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x682a9/0xeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 80 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 1777664 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 631440 data_alloc: 218103808 data_used: 139264
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 1769472 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 1753088 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 81 handle_osd_map epochs [82,83], i have 81, src has [1,83]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 81 handle_osd_map epochs [82,83], i have 83, src has [1,83]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c(unlocked)] enter Initial
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000039
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000191 1 0.000074
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000310 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c(unlocked)] enter Initial
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=0 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000031
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000111 1 0.000104
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000225 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 1712128 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.098665237s of 10.329547882s, submitted: 26
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.962248 2 0.000077
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.962537 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.962575 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000168 1 0.000244
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000012 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.963220 2 0.000136
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.963575 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.963618 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=0 lpr=83 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000221
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000017 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 1703936 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x70b6e/0xfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.094527 6 0.000124
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.095100 6 0.000210
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.049099 3 0.000163
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000056
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 lc 55'74 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.117387 1 0.000067
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.166672 3 0.000155
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000087 1 0.000041
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 lc 55'136 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.208135 1 0.000045
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 1646592 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.507480 1 0.000056
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.674167 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 1.768746 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.299128 1 0.000031
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.674124 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 1.769312 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] r=-1 lpr=84 pi=[59,84)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000063 1 0.000099
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000084 1 0.000132
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000045
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000049
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000795 2 0.000037
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000820 3 0.000065
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671797 data_alloc: 218103808 data_used: 143360
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 86 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.914199 2 0.000065
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915152 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.914854 3 0.000064
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.915746 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003535 3 0.000216
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1630208 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.065697 3 0.000158
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=6 ec=59/49 lis/c=86/59 les/c/f=87/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 1630208 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 1613824 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 1605632 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0x7406c/0x101000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 1605632 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 674349 data_alloc: 218103808 data_used: 147456
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 1597440 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 1597440 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 1531904 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0c9000/0x0/0x4ffc00000, data 0x75be9/0x104000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.654691696s of 12.283568382s, submitted: 60
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 682801 data_alloc: 218103808 data_used: 159744
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 1474560 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 1466368 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1441792 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 1441792 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fe0c3000/0x0/0x4ffc00000, data 0x792e3/0x10a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1433600 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13(unlocked)] enter Initial
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000045
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695051 data_alloc: 218103808 data_used: 163840
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 1433600 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 92 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007002 2 0.000066
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007263 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007282 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=92) [2] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000052 1 0.000077
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 1384448 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fe0b8000/0x0/0x4ffc00000, data 0x7e442/0x113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006363 6 0.000036
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.13] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007743 3 0.000133
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000110 1 0.000034
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 lc 55'118 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039540 1 0.000065
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 94 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 1269760 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.975169 1 0.000040
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.022681 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.029080 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=93) [2]/[0] r=-1 lpr=93 pi=[67,93)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000099 1 0.000157
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.010563 2 0.000066
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001555 2 0.000430
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 95 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 1253376 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fcf16000/0x0/0x4ffc00000, data 0x7fefc/0x117000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997622 2 0.000061
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009886 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=93/94 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=93/67 les/c/f=94/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.012413 4 0.000337
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 96 pg[9.13( v 55'385 (0'0,55'385] local-lis/les=95/96 n=5 ec=59/49 lis/c=95/67 les/c/f=96/68/0 sis=95) [2] r=0 lpr=95 pi=[67,95)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf12000/0x0/0x4ffc00000, data 0x81944/0x11a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 1228800 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715559 data_alloc: 218103808 data_used: 167936
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 1163264 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x83377/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 1163264 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x83377/0x11d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.864919662s of 12.002065659s, submitted: 47
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 1155072 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1138688 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 1138688 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.726456 70 0.000222
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.733645 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 41.747050 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started 41.747086 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.274039268s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 active pruub 200.589569092s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] exit Reset 0.000182 1 0.000280
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] exit Start 0.000044 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 99 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99 pruub=15.273898125s) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 200.589569092s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 99 handle_osd_map epochs [98,99], i have 99, src has [1,99]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 728641 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1122304 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033556 3 0.000138
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.033657 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=99) [0] r=-1 lpr=99 pi=[75,99)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000054 1 0.000086
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000025 1 0.000045
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 100 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1089536 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999801 4 0.000060
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999900 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.013186 5 0.000261
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000084 1 0.000069
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000420 1 0.000080
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059480 2 0.000081
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 101 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 983040 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcf00000/0x0/0x4ffc00000, data 0x8bafa/0x12c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.933730 1 0.000084
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007280 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.007204 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.007239 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=100) [0]/[2] async=[0] r=0 lpr=100 pi=[75,100)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005826950s) [0] async=[0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 55'385 active pruub 203.362533569s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] exit Reset 0.000124 1 0.000300
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 102 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102 pruub=15.005747795s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 203.362533569s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 974848 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035288 7 0.000140
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000064 1 0.000092
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 DELETING pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035309 2 0.000224
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035424 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 103 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=100/101 n=5 ec=59/49 lis/c=100/75 les/c/f=101/76/0 sis=102) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070788 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 733775 data_alloc: 218103808 data_used: 184320
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8ef2c/0x131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 933888 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcefc000/0x0/0x4ffc00000, data 0x8ef2c/0x131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.501713753s of 11.634410858s, submitted: 37
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 909312 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 739363 data_alloc: 218103808 data_used: 192512
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 851968 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcef9000/0x0/0x4ffc00000, data 0x90aa9/0x134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 811008 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19(unlocked)] enter Initial
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=0 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=0 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000046
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000074 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000115 1 0.000179
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 802816 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.827240 2 0.000244
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.827504 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.827624 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=106) [2] r=0 lpr=106 pi=[67,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000101 1 0.000156
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000011 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x941a3/0x13a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 720896 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 704512 heap: 64954368 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.460468 5 0.000086
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.19] failed. State was: not registered w/ OSD
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006087 4 0.000195
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000128 1 0.000039
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 lc 55'61 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.061598 1 0.000039
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 108 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcef0000/0x0/0x4ffc00000, data 0x95c08/0x13d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.486427 1 0.000026
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.554380 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.014903 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[67,107)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000102 1 0.000190
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000057
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001631 3 0.000127
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 109 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770148 data_alloc: 218103808 data_used: 192512
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 638976 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9925c/0x144000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.016149 2 0.000213
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017963 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=107/108 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=107/67 les/c/f=108/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003511 3 0.000119
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 110 pg[9.19( v 55'385 (0'0,55'385] local-lis/les=109/110 n=5 ec=59/49 lis/c=109/67 les/c/f=110/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65388544 unmapped: 614400 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 598016 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771602 data_alloc: 218103808 data_used: 192512
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65413120 unmapped: 589824 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.848556519s of 11.976703644s, submitted: 56
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0x9ade1/0x147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 110 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.396604 76 0.000290
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.400260 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 46.315436 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] exit Started 46.315458 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=86) [2] r=0 lpr=86 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.604092598s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 active pruub 216.645172119s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] exit Reset 0.000684 1 0.000752
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 111 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111 pruub=10.603497505s) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.645172119s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.509659 6 0.000066
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.509764 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=111) [0] r=-1 lpr=111 pi=[86,111)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000578 1 0.000705
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000196 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001175 2 0.000629
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000128 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 113 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011142 3 0.000351
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012752 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=86/87 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 62.976404 117 0.000311
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 62.985318 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 63.997906 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] exit Started 63.997938 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [2] r=0 lpr=75 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023878098s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 active pruub 216.589828491s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] exit Reset 0.000127 1 0.000797
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114 pruub=9.023796082s) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 216.589828491s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=86/86 les/c/f=87/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.013166 5 0.000825
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000068 1 0.000067
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000538 1 0.000053
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059238 2 0.000058
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 114 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 671744 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.941351 1 0.000139
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014739 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.027567 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.027881 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[86,113)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013965 3 0.000291
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014015 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998288155s) [0] async=[0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 55'385 active pruub 223.578338623s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=-1 lpr=114 pi=[75,114)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] exit Reset 0.000170 1 0.000230
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115 pruub=14.998158455s) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 223.578338623s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000426 1 0.000457
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.012417 2 0.000571
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 115 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 671744 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa3560/0x156000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996884 3 0.000116
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.009410 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=75/76 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 63.991223 121 0.000364
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active 63.998521 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary 65.010268 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] exit Started 65.010291 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=76) [2] r=0 lpr=76 crt=55'385 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010222435s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 active pruub 217.601470947s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] exit Reset 0.000095 1 0.000145
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116 pruub=8.010166168s) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 217.601470947s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015727 7 0.000122
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000495 1 0.000066
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.006072 5 0.000268
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000068 1 0.000028
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000467 1 0.000055
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 DELETING pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064913 2 0.000212
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065460 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=113/114 n=5 ec=59/49 lis/c=113/86 les/c/f=114/87/0 sis=115) [0] r=-1 lpr=115 pi=[86,115)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081255 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.095381 2 0.000049
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 116 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 630784 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.911250 1 0.000081
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012657 3 0.000072
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012714 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=-1 lpr=116 pi=[76,116)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Reset 0.000084 1 0.000119
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014406 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.023856 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.024405 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[75,115)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991624832s) [0] async=[0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 55'385 active pruub 225.596588135s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] exit Reset 0.000270 1 0.001286
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] exit Start 0.000100 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117 pruub=14.991396904s) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 225.596588135s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001414 2 0.000252
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 117 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784560 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017639 3 0.000119
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.019303 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034310 7 0.000306
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000107 1 0.000052
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 DELETING pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.040892 2 0.000241
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041043 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=115/116 n=5 ec=59/49 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=-1 lpr=117 pi=[75,117)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.075523 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 557056 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.914904 5 0.001236
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000132 1 0.000159
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000743 1 0.000056
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043437 2 0.000163
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.054906 1 0.000239
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014501 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started/Primary 2.033843 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] exit Started 2.034018 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 55'385 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899600029s) [1] async=[1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 55'385 active pruub 228.537811279s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] exit Reset 0.000113 1 0.000183
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Started
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Start
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119 pruub=15.899522781s) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY pruub 228.537811279s@ mbc={}] enter Started/Stray
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.128202 6 0.000139
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000415 2 0.000053
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042811 2 0.000167
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.043258 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] lb MIN local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=-1 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.171507 0 0.000000
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 548864 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fceca000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775471 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65462272 unmapped: 540672 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.599168777s of 11.808244705s, submitted: 61
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65470464 unmapped: 532480 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65486848 unmapped: 516096 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775739 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65519616 unmapped: 483328 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 475136 heap: 66002944 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778035 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.116951942s of 10.145874023s, submitted: 8
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65560576 unmapped: 1490944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781479 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65568768 unmapped: 1482752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 1474560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65576960 unmapped: 1474560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 1466368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65585152 unmapped: 1466368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 784925 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65593344 unmapped: 1458176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.908637047s of 10.939035416s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65609728 unmapped: 1441792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 1433600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787223 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 1417216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 788372 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65642496 unmapped: 1409024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790669 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65650688 unmapped: 1400832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.093049049s of 14.133566856s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 1392640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 1384448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794112 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65675264 unmapped: 1376256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65683456 unmapped: 1368064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1359872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794112 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1335296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1327104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796408 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1318912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.196287155s of 13.231869698s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65740800 unmapped: 1310720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799852 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1302528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1286144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800999 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65773568 unmapped: 1277952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1269760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65789952 unmapped: 1261568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802148 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.896893501s of 14.935640335s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1236992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1220608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1212416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805594 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65847296 unmapped: 1204224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 1196032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805594 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.869328499s of 10.890173912s, submitted: 6
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1187840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 1179648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806743 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65880064 unmapped: 1171456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1163264 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1155072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806743 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1138688 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65921024 unmapped: 1130496 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.008776665s of 12.022459984s, submitted: 2
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807891 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 1122304 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65937408 unmapped: 1114112 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810189 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1105920 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1097728 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65961984 unmapped: 1089536 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811338 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65970176 unmapped: 1081344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.079380035s of 13.108389854s, submitted: 8
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1073152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65986560 unmapped: 1064960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813634 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1056768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815929 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1048576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1040384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.708541870s of 10.978280067s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1024000 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1015808 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.7 deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820518 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1007616 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 999424 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822813 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 991232 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 983040 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 822813 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 974848 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.923019409s of 14.958429337s, submitted: 10
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 966656 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 958464 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825108 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 950272 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 942080 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 933888 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 925696 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 917504 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 328.941497803s of 328.962036133s, submitted: 6
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14805 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.769165039s of 600.112915039s, submitted: 90
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 1744896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:49 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 375.062835693s of 375.391784668s, submitted: 90
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 9699328 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab4000/0x0/0x4ffc00000, data 0xaf0c9/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 16957440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 123 ms_handle_reset con 0x556861424400 session 0x556861a3e000
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 16941056 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0x10af0c9/0x1169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949338 data_alloc: 218103808 data_used: 221184
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab0000/0x0/0x4ffc00000, data 0x10b0c85/0x116d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 16809984 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 124 ms_handle_reset con 0x55685fa43c00 session 0x556861a3e1e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 16613376 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbaaa000/0x0/0x4ffc00000, data 0x10b2851/0x1172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.260654449s of 35.765483856s, submitted: 60
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 126 ms_handle_reset con 0x55685ecad000 session 0x556861bab0e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42d7/0x1176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa4000/0x0/0x4ffc00000, data 0x10b5e54/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964309 data_alloc: 218103808 data_used: 233472
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 16498688 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 127 ms_handle_reset con 0x55685fa43c00 session 0x556861bab680
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 15368192 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 128 ms_handle_reset con 0x55685fee0c00 session 0x55685ff2cf00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971712 data_alloc: 218103808 data_used: 249856
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x10b999b/0x117f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 15056896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 22200320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.523596764s of 10.044019699s, submitted: 88
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 129 ms_handle_reset con 0x556861424400 session 0x556861a3fc20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 21004288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685fee1000 session 0x55685f1890e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685ecad400 session 0x55685ff31e00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1c00 session 0x556861b321e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685ecad400 session 0x556861b42960
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x40bfbe2/0x4190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 20930560 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334958 data_alloc: 218103808 data_used: 266240
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1000 session 0x556861b33860
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x556861424400 session 0x556861b42780
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 19914752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fee0c00 session 0x55685f0854a0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fa43c00 session 0x556861b42f00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f8a8a000/0x0/0x4ffc00000, data 0x40bfc15/0x4192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 19832832 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1000 session 0x556861b5f4a0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685ecad400 session 0x55685f048960
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1c00 session 0x556861b43860
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x556861424400 session 0x556861b5f680
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fa43c00 session 0x55685f049c20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 134 ms_handle_reset con 0x55685ecad400 session 0x556861b5fa40
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba7d000/0x0/0x4ffc00000, data 0x10c5d75/0x119e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 18751488 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 135 ms_handle_reset con 0x55685fee1000 session 0x55685ff2cf00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 18718720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 136 ms_handle_reset con 0x55685fee1c00 session 0x55685f0854a0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030796 data_alloc: 218103808 data_used: 266240
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 18628608 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 18595840 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x556861424800 session 0x556861b74f00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x5568610d1000 session 0x55685e9225a0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba77000/0x0/0x4ffc00000, data 0x10cafaa/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963214874s of 11.130927086s, submitted: 311
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 138 ms_handle_reset con 0x55685ecad000 session 0x556861b8c780
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 17514496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10cda61/0x11a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042431 data_alloc: 218103808 data_used: 278528
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 140 ms_handle_reset con 0x55685ecad400 session 0x556861b8cf00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa42800 session 0x556861b8dc20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa43c00 session 0x55686331a1e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 17309696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10d035e/0x11aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 142 ms_handle_reset con 0x55685ecad400 session 0x556861a32b40
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x5568610d1000 session 0x556861b321e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685ecad000 session 0x55686331a780
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fee1c00 session 0x55686331af00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fa42800 session 0x556860d0c1e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 17203200 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.896071434s of 11.615738869s, submitted: 215
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 17137664 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 147 ms_handle_reset con 0x55685fa42800 session 0x55686331b680
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063545 data_alloc: 218103808 data_used: 290816
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17080320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fba62000/0x0/0x4ffc00000, data 0x10dab66/0x11ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x556861a5e800 session 0x55686331bc20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x55685fee1000 session 0x556861a3e960
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 17014784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x556861a5e400 session 0x556861a32b40
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070073 data_alloc: 218103808 data_used: 307200
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fba5c000/0x0/0x4ffc00000, data 0x10de2b4/0x11c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.863252640s of 10.083137512s, submitted: 82
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x55685fc06c00 session 0x556863375c20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 150 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 16801792 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 151 ms_handle_reset con 0x55685fee1000 session 0x5568633743c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fba56000/0x0/0x4ffc00000, data 0x10e1a95/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e400 session 0x55686331a000
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fba51000/0x0/0x4ffc00000, data 0x10e366d/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082688 data_alloc: 218103808 data_used: 307200
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e800 session 0x5568633743c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1400 session 0x5568633a6000
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55686331b680
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1c00 session 0x556861a3e960
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1000 session 0x556860d0c1e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861a5fc00 session 0x5568632121e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086691 data_alloc: 218103808 data_used: 315392
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.245035172s of 10.595973969s, submitted: 74
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x5568632125a0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55685f04a960
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba2b000/0x0/0x4ffc00000, data 0x110914a/0x11f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e800 session 0x55685f250780
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5d000 session 0x5568632130e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e400 session 0x556861b752c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x55685fa42800 session 0x556861b42d20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094730 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 155 ms_handle_reset con 0x556861a5d000 session 0x556861528000
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5e800 session 0x556861bab860
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861ae1000 session 0x556861b8c3c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861b8c1e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861ab4f00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x110e473/0x11fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x55685fa42800 session 0x556861ab4d20
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 16515072 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100812 data_alloc: 218103808 data_used: 335872
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x556861a5d000 session 0x5568615290e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1000 session 0x556863212f00
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.930842400s of 11.172493935s, submitted: 53
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1c00 session 0x5568633a61e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x111001e/0x11ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,1])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 16490496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 158 ms_handle_reset con 0x55685fa42800 session 0x5568632132c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106820 data_alloc: 218103808 data_used: 327680
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x10ef625/0x11df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad000 session 0x55686331ba40
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad400 session 0x556861bab0e0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1000 session 0x5568633a6780
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1c00 session 0x5568633a6b40
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 160 ms_handle_reset con 0x55685ecad000 session 0x5568633a72c0
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.445519447s of 13.752939224s, submitted: 101
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s#012Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: mgrc ms_handle_reset ms_handle_reset con 0x55685f53c000
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: mgrc handle_mgr_configure stats_period=5
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 16146432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 15843328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 15564800 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 15515648 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:50 np0005533938 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579092568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 13:51:50 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14809 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871650015' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:51:50 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14813 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 13:51:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361796308' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 13:51:51 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544646524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 13:51:51 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.553767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311553800, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2311, "num_deletes": 271, "total_data_size": 3545995, "memory_usage": 3612936, "flush_reason": "Manual Compaction"}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311570944, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3449696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20959, "largest_seqno": 23269, "table_properties": {"data_size": 3438945, "index_size": 6989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3417313, "raw_average_value_size": 3205, "num_data_blocks": 310, "num_entries": 1066, "num_filter_entries": 1066, "num_deletions": 271, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010125, "oldest_key_time": 1764010125, "file_creation_time": 1764010311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17210 microseconds, and 8086 cpu microseconds.
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.570978) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3449696 bytes OK
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.570993) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572960) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572974) EVENT_LOG_v1 {"time_micros": 1764010311572970, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.572989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3536003, prev total WAL file size 3536003, number of live WAL files 2.
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.573769) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3368KB)], [50(7327KB)]
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311573795, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10953450, "oldest_snapshot_seqno": -1}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4851 keys, 9181861 bytes, temperature: kUnknown
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311631757, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9181861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9146435, "index_size": 22196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 119155, "raw_average_key_size": 24, "raw_value_size": 9055691, "raw_average_value_size": 1866, "num_data_blocks": 930, "num_entries": 4851, "num_filter_entries": 4851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.631977) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9181861 bytes
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.633447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.8 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5389, records dropped: 538 output_compression: NoCompression
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.633462) EVENT_LOG_v1 {"time_micros": 1764010311633455, "job": 26, "event": "compaction_finished", "compaction_time_micros": 58021, "compaction_time_cpu_micros": 18676, "output_level": 6, "num_output_files": 1, "total_output_size": 9181861, "num_input_records": 5389, "num_output_records": 4851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311634018, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010311635249, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.573691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:51:51.635294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:51:51 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:52 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.14827 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:51:52 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:51:52.104+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:51:52 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249350633' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376743541' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876718826' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467813662' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 24 13:51:52 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751375874' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349103013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330414028' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3021928101' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 13:51:53 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007814410' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 24 13:51:53 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647054694' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] exit Reset 0.000144 1 0.000322
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994112968s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 166.084762573s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994158745s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084793091s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] exit Reset 0.000066 1 0.000104
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.994071960s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084762573s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] exit Reset 0.000797 1 0.000867
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] exit Start 0.000022 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.993534088s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.084701538s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.474400 46 0.000155
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.485983 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.486060 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.486097 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525858879s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617202759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] exit Reset 0.000062 1 0.000097
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525828362s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617202759s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.472724 46 0.000152
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.484947 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.484996 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.485018 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525444031s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 163.617691040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] exit Reset 0.000057 1 0.000629
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 75 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=12.525407791s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.617691040s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 786432 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015181 3 0.000191
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.015266 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014440 3 0.000049
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014494 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000063 1 0.000095
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000038
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000068 1 0.000110
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000041
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.024054 7 0.000130
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000090 1 0.000035
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029625 7 0.000181
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028996 7 0.000130
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029605 7 0.000371
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000128 1 0.000076
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000185 1 0.000083
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000261 1 0.000074
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066580 2 0.000205
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066724 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.090823 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.098167 2 0.000170
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098342 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.1e( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.128032 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127666 2 0.000141
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127897 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.16( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=5 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.156965 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 DELETING pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.172048 2 0.000192
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172349 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 76 pg[9.6( v 55'385 (0'0,55'385] lb MIN local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.202012 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 729088 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 76 handle_osd_map epochs [76,77], i have 77, src has [1,77]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003459 4 0.000063
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003585 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003547 4 0.000064
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003689 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.006830 5 0.000656
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006733 5 0.000516
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000114 1 0.000056
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000400 1 0.000080
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061615 2 0.000042
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.062176 1 0.000040
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000546 1 0.000040
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.034886 2 0.000041
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 77 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 1007616 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.953204 1 0.000094
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022519 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.026130 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.026164 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.984064102s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117111206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] exit Reset 0.000325 1 0.000405
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.918314 1 0.000076
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.022942 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.026674 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.026700 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[59,76)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983452797s) [2] async=[2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 169.117080688s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] exit Start 0.000208 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] exit Reset 0.000125 1 0.000179
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983373642s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117080688s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 78 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78 pruub=14.983811378s) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.117111206s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 811008 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025568 7 0.000453
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000164 1 0.000080
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029545 7 0.000673
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000143 1 0.000057
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041707 2 0.000250
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041948 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.18( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=5 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.067589 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.18] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 DELETING pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.089708 2 0.000171
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.089925 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 79 pg[9.8( v 55'385 (0'0,55'385] lb MIN local-lis/les=76/77 n=6 ec=59/49 lis/c=76/59 les/c/f=77/60/0 sis=78) [2] r=-1 lpr=78 pi=[59,78)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.119900 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.8] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xd427d/0x151000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 645863 data_alloc: 218103808 data_used: 98304
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 704512 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcacb000/0x0/0x4ffc00000, data 0xd5bd4/0x152000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.853742599s of 11.075757980s, submitted: 70
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 663552 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 655360 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcac8000/0x0/0x4ffc00000, data 0xd7751/0x155000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 653626 data_alloc: 218103808 data_used: 98304
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 573440 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 540672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 540672 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 532480 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 81 handle_osd_map epochs [82,83], i have 81, src has [1,83]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.880109 69 0.000248
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.891839 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 40.893175 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 40.893577 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120257378s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617523193s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] exit Reset 0.000126 2 0.000187
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.120175362s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617523193s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.878860 69 0.000889
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.891128 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 40.891217 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 40.891288 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=59) [1] r=0 lpr=59 crt=55'385 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 82 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119441986s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 179.617889404s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] exit Reset 0.000138 2 0.000205
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] exit Start 0.000022 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 83 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82 pruub=15.119354248s) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.617889404s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 83 handle_osd_map epochs [82,83], i have 83, src has [1,83]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 450560 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000339 3 0.000121
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000457 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.001552 3 0.000118
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.001722 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2] r=-1 lpr=82 pi=[59,82)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000887 1 0.000988
Nov 24 13:51:54 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533489741' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000806 1 0.001052
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000030 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000173
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000434 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000768
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000079 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000037 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 84 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666991 data_alloc: 218103808 data_used: 110592
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 442368 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 84 heartbeat osd_stat(store_statfs(0x4fcabf000/0x0/0x4ffc00000, data 0xdc9c8/0x15e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 84 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.070412 4 0.000148
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.070660 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.069340 4 0.000779
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.070255 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.048710 5 0.000387
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000341 1 0.000085
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.048960 5 0.000453
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000590 1 0.000033
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.117051 2 0.000069
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.117694 1 0.000049
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000374 1 0.000050
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 409600 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.207841 2 0.000081
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 85 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.441883 1 0.000179
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.608824 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.679505 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.679607 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439636230s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619369507s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] exit Reset 0.000146 1 0.000192
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439560890s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619369507s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.234371 1 0.000081
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.609595 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.680066 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.680558 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=84) [2]/[1] async=[2] r=0 lpr=84 pi=[59,84)/1 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439293861s) [2] async=[2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 active pruub 182.619827271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] exit Reset 0.000118 1 0.000205
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 86 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86 pruub=15.439209938s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.619827271s@ mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 360448 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 360448 heap: 71327744 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.282258034s of 10.740533829s, submitted: 33
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.809903 6 0.000196
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.809542 6 0.000621
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000443 1 0.000049
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000681 2 0.000139
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 DELETING pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060677 3 0.000330
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061191 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=6 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.871144 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.c] failed. State was: not registered w/ OSD
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 DELETING pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.111936 2 0.000169
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112713 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 87 pg[9.1c( v 55'385 (0'0,55'385] lb MIN local-lis/les=84/85 n=5 ec=59/49 lis/c=84/59 les/c/f=85/60/0 sis=86) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'385 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.922324 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1c] failed. State was: not registered w/ OSD
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1400832 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab3000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 658095 data_alloc: 218103808 data_used: 118784
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 1400832 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1392640 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1368064 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 87 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xe3438/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 663010 data_alloc: 218103808 data_used: 126976
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1368064 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1343488 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 1343488 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1335296 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1335296 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xe4fb5/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 88 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.799038887s of 10.860842705s, submitted: 43
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 674738 data_alloc: 218103808 data_used: 135168
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1310720 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1310720 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcaa6000/0x0/0x4ffc00000, data 0xea22c/0x175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1277952 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1269760 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 91 handle_osd_map epochs [92,94], i have 91, src has [1,94]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 1130496 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687129 data_alloc: 218103808 data_used: 143360
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1122304 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 94 heartbeat osd_stat(store_statfs(0x4fca9f000/0x0/0x4ffc00000, data 0xef25a/0x17e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1114112 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 1114112 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.946335793s of 10.017564774s, submitted: 35
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693753 data_alloc: 218103808 data_used: 151552
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 1105920 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 1097728 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fca9a000/0x0/0x4ffc00000, data 0xf26d5/0x184000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 96 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1089536 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15(unlocked)] enter Initial
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=0 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=0 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000024
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000046
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000032 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000167 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1081344 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 99 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007614 2 0.000067
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007817 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007839 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=98) [1] r=0 lpr=99 pi=[67,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000075 1 0.000117
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fca8d000/0x0/0x4ffc00000, data 0xf93eb/0x190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 1024000 heap: 72376320 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.023515 5 0.000107
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: not registered w/ OSD
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.016319 4 0.000127
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000079 1 0.000085
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 lc 55'152 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.054536 1 0.000102
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 101 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720508 data_alloc: 218103808 data_used: 159744
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1155072 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.933305 1 0.000033
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.004411 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.027976 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[67,100)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000190 1 0.000282
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000065 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000255
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001237 3 0.000081
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000023 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 102 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xfc92b/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 102 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xfc92b/0x197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1105920 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011152 2 0.000186
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012633 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=100/101 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=100/67 les/c/f=101/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014761 4 0.000202
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 103 pg[9.15( v 55'385 (0'0,55'385] local-lis/les=102/103 n=5 ec=59/49 lis/c=102/67 les/c/f=103/68/0 sis=102) [1] r=0 lpr=102 pi=[67,102)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1097728 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1097728 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1073152 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 727902 data_alloc: 218103808 data_used: 159744
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1073152 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fca83000/0x0/0x4ffc00000, data 0xfe366/0x19a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.715170860s of 11.885769844s, submitted: 61
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1081344 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0x101a60/0x1a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1081344 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0x1035dd/0x1a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741097 data_alloc: 218103808 data_used: 163840
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 1064960 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 1056768 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fca75000/0x0/0x4ffc00000, data 0x105042/0x1a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 1048576 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 1040384 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 999424 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751838 data_alloc: 218103808 data_used: 163840
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 999424 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fca6f000/0x0/0x4ffc00000, data 0x108628/0x1ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 991232 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 991232 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 110 handle_osd_map epochs [111,112], i have 110, src has [1,112]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.227064133s of 11.310695648s, submitted: 29
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca6b000/0x0/0x4ffc00000, data 0x10a1ad/0x1af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 983040 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 974848 heap: 73424896 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 112 heartbeat osd_stat(store_statfs(0x4fca68000/0x0/0x4ffc00000, data 0x10d8a7/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 112 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 766036 data_alloc: 218103808 data_used: 163840
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 2007040 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fca61000/0x0/0x4ffc00000, data 0x110eaa/0x1bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 1998848 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f(unlocked)] enter Initial
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=0 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=0 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000039
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000065
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000168 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 1990656 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007651 2 0.000059
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007851 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007895 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=116) [1] r=0 lpr=116 pi=[76,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000162 1 0.000218
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000043 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 117 heartbeat osd_stat(store_statfs(0x4fca59000/0x0/0x4ffc00000, data 0x1144d2/0x1c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 1982464 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 1974272 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.927454 5 0.000149
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 0'0 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=76/76 les/c/f=77/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 crt=55'385 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006233 4 0.000134
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000083 1 0.000045
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 lc 55'105 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043930 1 0.000039
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 118 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.061012 1 0.000081
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.111436 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] exit Started 2.039016 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[76,117)/1 luod=0'0 crt=55'385 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 luod=0'0 crt=55'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Reset 0.000261 1 0.000417
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Start
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000370
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=0/0 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000823 3 0.000155
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 119 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 791141 data_alloc: 218103808 data_used: 172032
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 901120 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008310 2 0.000132
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009431 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=117/118 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=117/76 les/c/f=118/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009504 3 0.000380
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 pg_epoch: 120 pg[9.1f( v 55'385 (0'0,55'385] local-lis/les=119/120 n=5 ec=59/49 lis/c=119/76 les/c/f=120/77/0 sis=119) [1] r=0 lpr=119 pi=[76,119)/1 crt=55'385 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11945e/0x1cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 892928 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 892928 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 884736 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.974453926s of 11.105629921s, submitted: 30
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 884736 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 794169 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 876544 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 868352 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 868352 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795316 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 860160 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 851968 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 851968 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 843776 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 843776 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796463 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 835584 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 835584 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.899918556s of 13.920128822s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 827392 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797611 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 819200 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 819200 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 811008 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 811008 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 802816 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798759 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 794624 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 794624 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 778240 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 778240 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 770048 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 799907 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 770048 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.013246536s of 13.031503677s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 761856 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 753664 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 753664 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802203 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 745472 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 737280 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 729088 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803351 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.022245407s of 10.044643402s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 720896 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 712704 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 712704 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 704512 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805646 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 696320 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 696320 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 688128 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807940 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 679936 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 671744 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 671744 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.152551651s of 12.179207802s, submitted: 8
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 663552 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 663552 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811381 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 638976 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 638976 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 614400 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 598016 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 814825 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 581632 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 573440 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 573440 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.773813248s of 10.820886612s, submitted: 14
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817121 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 565248 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 557056 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 557056 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819415 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 548864 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 532480 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 524288 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 516096 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.804003716s of 11.833882332s, submitted: 8
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 821710 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 499712 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 499712 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824006 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824006 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 491520 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.071556091s of 12.114300728s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 483328 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 475136 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 466944 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826302 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 466944 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 458752 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 458752 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 450560 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 450560 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827450 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 442368 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 434176 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 434176 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827450 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 425984 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 425984 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.998170853s of 15.020214081s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 417792 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828598 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 409600 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 409600 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 401408 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 401408 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833193 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 393216 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 385024 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.109220505s of 10.150906563s, submitted: 12
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 376832 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836639 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 368640 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 360448 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 352256 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 344064 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836639 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 344064 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 335872 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 319488 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.945559502s of 10.964863777s, submitted: 6
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 319488 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 311296 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838937 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 311296 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 303104 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 303104 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840086 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 294912 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 278528 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 278528 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 262144 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 262144 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.889883041s of 11.918990135s, submitted: 8
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843531 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 253952 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 245760 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 229376 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846976 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 221184 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 221184 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 212992 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846976 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 204800 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 204800 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 196608 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.983187675s of 13.045228958s, submitted: 8
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 196608 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 188416 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1024000 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1015808 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 892928 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 671744 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 663552 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 638976 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.0 total, 600.0 interval#012Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s#012Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 245760 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 65536 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.267303467s of 310.278106689s, submitted: 2
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 57344 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 1998848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 1925120 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 1908736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b41ff1400 session 0x560b413a9860
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b42032000 session 0x560b41b383c0
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:51:54 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:55:16 np0005533938 rsyslogd[1008]: imjournal: 15850 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 24 13:55:16 np0005533938 nova_compute[270693]: 2025-11-24 18:55:16.703 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:16 np0005533938 nova_compute[270693]: 2025-11-24 18:55:16.704 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:17 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:18 np0005533938 nova_compute[270693]: 2025-11-24 18:55:18.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:18 np0005533938 nova_compute[270693]: 2025-11-24 18:55:18.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:55:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:55:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:55:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:55:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2092914081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:55:19 np0005533938 nova_compute[270693]: 2025-11-24 18:55:19.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:19 np0005533938 nova_compute[270693]: 2025-11-24 18:55:19.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:19 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:20 np0005533938 nova_compute[270693]: 2025-11-24 18:55:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:20 np0005533938 nova_compute[270693]: 2025-11-24 18:55:20.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:21 np0005533938 nova_compute[270693]: 2025-11-24 18:55:21.527 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:55:21 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:55:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:55:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:22.752 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:55:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:23 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:25 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:27 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:29 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.168323) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531168379, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1530, "num_deletes": 501, "total_data_size": 1978710, "memory_usage": 2007920, "flush_reason": "Manual Compaction"}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531183139, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1934495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24360, "largest_seqno": 25889, "table_properties": {"data_size": 1927782, "index_size": 3403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17528, "raw_average_key_size": 19, "raw_value_size": 1912341, "raw_average_value_size": 2141, "num_data_blocks": 153, "num_entries": 893, "num_filter_entries": 893, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010399, "oldest_key_time": 1764010399, "file_creation_time": 1764010531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14892 microseconds, and 8858 cpu microseconds.
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.183213) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1934495 bytes OK
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.183244) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185062) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185092) EVENT_LOG_v1 {"time_micros": 1764010531185082, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.185190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1970955, prev total WAL file size 1970955, number of live WAL files 2.
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.186439) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1889KB)], [56(10MB)]
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531186490, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12430084, "oldest_snapshot_seqno": -1}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4829 keys, 7429202 bytes, temperature: kUnknown
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531245160, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7429202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7396567, "index_size": 19469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 121324, "raw_average_key_size": 25, "raw_value_size": 7308802, "raw_average_value_size": 1513, "num_data_blocks": 802, "num_entries": 4829, "num_filter_entries": 4829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.245529) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7429202 bytes
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.246921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.5 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(10.3) write-amplify(3.8) OK, records in: 5843, records dropped: 1014 output_compression: NoCompression
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.246968) EVENT_LOG_v1 {"time_micros": 1764010531246951, "job": 30, "event": "compaction_finished", "compaction_time_micros": 58778, "compaction_time_cpu_micros": 33868, "output_level": 6, "num_output_files": 1, "total_output_size": 7429202, "num_input_records": 5843, "num_output_records": 4829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531247366, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010531249092, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.186317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:55:31.249162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:55:31 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:33 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:34 np0005533938 nova_compute[270693]: 2025-11-24 18:55:34.146 270697 DEBUG oslo_concurrency.processutils [None req-6ece738e-cb4a-43c0-9acf-dac429c31015 129aaec41c194fc181333dedde345fb5 a9452fe831594f6ba61571a76d883af5 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:55:34 np0005533938 nova_compute[270693]: 2025-11-24 18:55:34.171 270697 DEBUG oslo_concurrency.processutils [None req-6ece738e-cb4a-43c0-9acf-dac429c31015 129aaec41c194fc181333dedde345fb5 a9452fe831594f6ba61571a76d883af5 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:55:34
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr']
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:55:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:55:35 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:37 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:39 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:41 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:41.086 179763 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:2b:64', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'fa:26:5b:32:fa:ba'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 24 13:55:41 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:41.088 179763 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 24 13:55:41 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:55:43 np0005533938 podman[293160]: 2025-11-24 18:55:43.983607222 +0000 UTC m=+0.078642603 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 24 13:55:43 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:44 np0005533938 podman[293162]: 2025-11-24 18:55:44.006196325 +0000 UTC m=+0.082960730 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:55:44 np0005533938 podman[293161]: 2025-11-24 18:55:44.026611114 +0000 UTC m=+0.114347377 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:55:45 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:55:45.089 179763 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=302e9f34-0427-4ff9-a29b-2fc7b5250666, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 24 13:55:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c3bd060-c136-42a9-b9e8-916bef5a3198 does not exist
Nov 24 13:55:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev da245d16-190c-4a9d-adc6-82437747e5d7 does not exist
Nov 24 13:55:50 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 6c32bbf6-64b9-4ec2-9494-635e2f78484f does not exist
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:55:50 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:55:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:55:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:51 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.392063172 +0000 UTC m=+0.038104552 container create b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 24 13:55:51 np0005533938 systemd[1]: Started libpod-conmon-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope.
Nov 24 13:55:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.46475748 +0000 UTC m=+0.110798850 container init b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.373250512 +0000 UTC m=+0.019291902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.472059348 +0000 UTC m=+0.118100718 container start b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.475294537 +0000 UTC m=+0.121335917 container attach b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 24 13:55:51 np0005533938 blissful_fermi[293513]: 167 167
Nov 24 13:55:51 np0005533938 systemd[1]: libpod-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope: Deactivated successfully.
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.477099181 +0000 UTC m=+0.123140562 container died b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:55:51 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8e491108407580a37cf6b65dc11a77c67c5d145428d56be8862db381261b56b8-merged.mount: Deactivated successfully.
Nov 24 13:55:51 np0005533938 podman[293496]: 2025-11-24 18:55:51.512725253 +0000 UTC m=+0.158766623 container remove b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:55:51 np0005533938 systemd[1]: libpod-conmon-b9610bc2ab4f358201fdd642c7c3ce8437caa8e8b86f9de14130315df0152152.scope: Deactivated successfully.
Nov 24 13:55:51 np0005533938 podman[293537]: 2025-11-24 18:55:51.717027738 +0000 UTC m=+0.061390802 container create 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:55:51 np0005533938 systemd[1]: Started libpod-conmon-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope.
Nov 24 13:55:51 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:51 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:51 np0005533938 podman[293537]: 2025-11-24 18:55:51.696392103 +0000 UTC m=+0.040755197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:51 np0005533938 podman[293537]: 2025-11-24 18:55:51.800276003 +0000 UTC m=+0.144639127 container init 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:55:51 np0005533938 podman[293537]: 2025-11-24 18:55:51.811560579 +0000 UTC m=+0.155923653 container start 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:55:51 np0005533938 podman[293537]: 2025-11-24 18:55:51.815390223 +0000 UTC m=+0.159753307 container attach 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:55:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:52 np0005533938 cranky_sammet[293554]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:55:52 np0005533938 cranky_sammet[293554]: --> relative data size: 1.0
Nov 24 13:55:52 np0005533938 cranky_sammet[293554]: --> All data devices are unavailable
Nov 24 13:55:52 np0005533938 systemd[1]: libpod-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope: Deactivated successfully.
Nov 24 13:55:52 np0005533938 podman[293537]: 2025-11-24 18:55:52.75348125 +0000 UTC m=+1.097844294 container died 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 24 13:55:52 np0005533938 systemd[1]: var-lib-containers-storage-overlay-1fe76f457d31beed41a31da74d42b353dbab6b9fb80c01040c8cdb1058464435-merged.mount: Deactivated successfully.
Nov 24 13:55:52 np0005533938 podman[293537]: 2025-11-24 18:55:52.811368725 +0000 UTC m=+1.155731769 container remove 992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:55:52 np0005533938 systemd[1]: libpod-conmon-992494f45183997d4c999a6854683202296ca1efcbeca431f5e40ce8eae046b1.scope: Deactivated successfully.
Nov 24 13:55:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.375346294 +0000 UTC m=+0.042471059 container create 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 24 13:55:53 np0005533938 systemd[1]: Started libpod-conmon-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope.
Nov 24 13:55:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.354583226 +0000 UTC m=+0.021708081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.459516272 +0000 UTC m=+0.126641087 container init 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.465823386 +0000 UTC m=+0.132948161 container start 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.46925411 +0000 UTC m=+0.136378895 container attach 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:55:53 np0005533938 gifted_knuth[293751]: 167 167
Nov 24 13:55:53 np0005533938 systemd[1]: libpod-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope: Deactivated successfully.
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.473712709 +0000 UTC m=+0.140837534 container died 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:55:53 np0005533938 systemd[1]: var-lib-containers-storage-overlay-8824c7643a013a342276f1dbb0f38be000e5748fc3b1b9e46242d1b2b6868673-merged.mount: Deactivated successfully.
Nov 24 13:55:53 np0005533938 podman[293735]: 2025-11-24 18:55:53.516569147 +0000 UTC m=+0.183693912 container remove 1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:55:53 np0005533938 systemd[1]: libpod-conmon-1617e69b0f72614e4ff313417bb08bb4b0f94d852b7b84c142354164caecbae3.scope: Deactivated successfully.
Nov 24 13:55:53 np0005533938 podman[293775]: 2025-11-24 18:55:53.671802492 +0000 UTC m=+0.041865504 container create 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:55:53 np0005533938 systemd[1]: Started libpod-conmon-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope.
Nov 24 13:55:53 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:53 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:53 np0005533938 podman[293775]: 2025-11-24 18:55:53.651442645 +0000 UTC m=+0.021505637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:53 np0005533938 podman[293775]: 2025-11-24 18:55:53.75061794 +0000 UTC m=+0.120680912 container init 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:55:53 np0005533938 podman[293775]: 2025-11-24 18:55:53.756119474 +0000 UTC m=+0.126182436 container start 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 24 13:55:53 np0005533938 podman[293775]: 2025-11-24 18:55:53.758879602 +0000 UTC m=+0.128942594 container attach 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:55:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]: {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    "0": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "devices": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "/dev/loop3"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            ],
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_name": "ceph_lv0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_size": "21470642176",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "name": "ceph_lv0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "tags": {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_name": "ceph",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.crush_device_class": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.encrypted": "0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_id": "0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.vdo": "0"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            },
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "vg_name": "ceph_vg0"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        }
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    ],
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    "1": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "devices": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "/dev/loop4"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            ],
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_name": "ceph_lv1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_size": "21470642176",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "name": "ceph_lv1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "tags": {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_name": "ceph",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.crush_device_class": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.encrypted": "0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_id": "1",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.vdo": "0"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            },
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "vg_name": "ceph_vg1"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        }
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    ],
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    "2": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "devices": [
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "/dev/loop5"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            ],
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_name": "ceph_lv2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_size": "21470642176",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "name": "ceph_lv2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "tags": {
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.cluster_name": "ceph",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.crush_device_class": "",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.encrypted": "0",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osd_id": "2",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:                "ceph.vdo": "0"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            },
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "type": "block",
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:            "vg_name": "ceph_vg2"
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:        }
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]:    ]
Nov 24 13:55:54 np0005533938 relaxed_sinoussi[293791]: }
Nov 24 13:55:54 np0005533938 systemd[1]: libpod-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope: Deactivated successfully.
Nov 24 13:55:54 np0005533938 podman[293775]: 2025-11-24 18:55:54.499659014 +0000 UTC m=+0.869722016 container died 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:55:54 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b88106b21d16ecbbd28e6435fd076af3ce391a1a7e725d9b822248d3998cf7c5-merged.mount: Deactivated successfully.
Nov 24 13:55:54 np0005533938 podman[293775]: 2025-11-24 18:55:54.558581695 +0000 UTC m=+0.928644657 container remove 7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:55:54 np0005533938 systemd[1]: libpod-conmon-7395125d717f88ca8e4999c5725265641ee7ef2dc47310d21a6bfca5a5033c5b.scope: Deactivated successfully.
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.174059584 +0000 UTC m=+0.038316268 container create 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:55:55 np0005533938 systemd[1]: Started libpod-conmon-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope.
Nov 24 13:55:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.242995959 +0000 UTC m=+0.107252653 container init 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.248646627 +0000 UTC m=+0.112903301 container start 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.15632248 +0000 UTC m=+0.020579184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.251290092 +0000 UTC m=+0.115546776 container attach 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:55:55 np0005533938 practical_vaughan[293968]: 167 167
Nov 24 13:55:55 np0005533938 systemd[1]: libpod-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope: Deactivated successfully.
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.253316521 +0000 UTC m=+0.117573205 container died 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 24 13:55:55 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4987606d5cb081bf442c52b8586754bb6308ead8bf033018b72346c613abd7c5-merged.mount: Deactivated successfully.
Nov 24 13:55:55 np0005533938 podman[293952]: 2025-11-24 18:55:55.339249363 +0000 UTC m=+0.203506037 container remove 910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_vaughan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:55:55 np0005533938 systemd[1]: libpod-conmon-910a13c29801c6e734ac29546efd895f80ce4fa9613eaab0fc88119eb166577a.scope: Deactivated successfully.
Nov 24 13:55:55 np0005533938 podman[293994]: 2025-11-24 18:55:55.504527424 +0000 UTC m=+0.048234851 container create cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 24 13:55:55 np0005533938 systemd[1]: Started libpod-conmon-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope.
Nov 24 13:55:55 np0005533938 podman[293994]: 2025-11-24 18:55:55.479099442 +0000 UTC m=+0.022806779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:55:55 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:55:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:55 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:55:55 np0005533938 podman[293994]: 2025-11-24 18:55:55.587479842 +0000 UTC m=+0.131187089 container init cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 24 13:55:55 np0005533938 podman[293994]: 2025-11-24 18:55:55.59927278 +0000 UTC m=+0.142980027 container start cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:55:55 np0005533938 podman[293994]: 2025-11-24 18:55:55.603613257 +0000 UTC m=+0.147320524 container attach cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:55:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:56 np0005533938 gallant_ride[294010]: {
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_id": 0,
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "type": "bluestore"
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    },
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_id": 1,
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "type": "bluestore"
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    },
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_id": 2,
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:        "type": "bluestore"
Nov 24 13:55:56 np0005533938 gallant_ride[294010]:    }
Nov 24 13:55:56 np0005533938 gallant_ride[294010]: }
Nov 24 13:55:56 np0005533938 systemd[1]: libpod-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope: Deactivated successfully.
Nov 24 13:55:56 np0005533938 podman[293994]: 2025-11-24 18:55:56.585521155 +0000 UTC m=+1.129228412 container died cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:55:56 np0005533938 systemd[1]: var-lib-containers-storage-overlay-bdf2804f16c26d08481c99a0775a6c5d92f516fb233480824dc65f7c02451bcb-merged.mount: Deactivated successfully.
Nov 24 13:55:56 np0005533938 podman[293994]: 2025-11-24 18:55:56.63359796 +0000 UTC m=+1.177305207 container remove cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 24 13:55:56 np0005533938 systemd[1]: libpod-conmon-cfd1028f28380da0400705cc27e3fb2e43da58d42170f3568841c9f7c57e16d5.scope: Deactivated successfully.
Nov 24 13:55:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:55:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:56 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:55:56 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:56 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 8ab42336-6663-4f64-aa09-65e1d1f03ce0 does not exist
Nov 24 13:55:56 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev af604f89-38b0-4ef0-b81d-132c4a982b2e does not exist
Nov 24 13:55:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:55:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:55:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:14 np0005533938 podman[294110]: 2025-11-24 18:56:14.963638545 +0000 UTC m=+0.059401624 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 24 13:56:14 np0005533938 podman[294112]: 2025-11-24 18:56:14.986615857 +0000 UTC m=+0.075062207 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 24 13:56:14 np0005533938 podman[294111]: 2025-11-24 18:56:14.993583677 +0000 UTC m=+0.088923665 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:56:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.528 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.543 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.597 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.598 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:56:16 np0005533938 nova_compute[270693]: 2025-11-24 18:56:16.599 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:56:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:56:17 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409027887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.083 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.227 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5000MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.228 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.330 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.330 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.348 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:56:17 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:56:17 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244117392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.727 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.734 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.791 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.794 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:56:17 np0005533938 nova_compute[270693]: 2025-11-24 18:56:17.795 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:56:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:56:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:56:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:56:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111280441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:56:19 np0005533938 nova_compute[270693]: 2025-11-24 18:56:19.781 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:19 np0005533938 nova_compute[270693]: 2025-11-24 18:56:19.781 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:56:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:20 np0005533938 nova_compute[270693]: 2025-11-24 18:56:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:20 np0005533938 nova_compute[270693]: 2025-11-24 18:56:20.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:21 np0005533938 nova_compute[270693]: 2025-11-24 18:56:21.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:22 np0005533938 nova_compute[270693]: 2025-11-24 18:56:22.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:22 np0005533938 nova_compute[270693]: 2025-11-24 18:56:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:56:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:56:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:56:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:56:22.753 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:56:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:56:34
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log']
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:56:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:56:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:56:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:56:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:45 np0005533938 podman[294214]: 2025-11-24 18:56:45.980635152 +0000 UTC m=+0.061085825 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:56:45 np0005533938 podman[294216]: 2025-11-24 18:56:45.995060235 +0000 UTC m=+0.069537951 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 24 13:56:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:46 np0005533938 podman[294215]: 2025-11-24 18:56:46.086953622 +0000 UTC m=+0.167265841 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:56:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:56:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 26caf26c-5d42-4711-b260-903c4385220f does not exist
Nov 24 13:56:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5b15f244-b496-4d2b-8d13-75ccca904f55 does not exist
Nov 24 13:56:57 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 55a1ac89-de69-4f8e-8f04-b63cd3e5fd89 does not exist
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:56:57 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:56:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:56:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.293453205 +0000 UTC m=+0.054612206 container create 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 24 13:56:58 np0005533938 systemd[1]: Started libpod-conmon-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope.
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.263565504 +0000 UTC m=+0.024724555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:56:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.392296702 +0000 UTC m=+0.153455693 container init 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.402327437 +0000 UTC m=+0.163486438 container start 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.406298544 +0000 UTC m=+0.167457515 container attach 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:56:58 np0005533938 youthful_ptolemy[294559]: 167 167
Nov 24 13:56:58 np0005533938 systemd[1]: libpod-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope: Deactivated successfully.
Nov 24 13:56:58 np0005533938 conmon[294559]: conmon 1806a74497dd119a2b94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope/container/memory.events
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.410071156 +0000 UTC m=+0.171230197 container died 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:56:58 np0005533938 systemd[1]: var-lib-containers-storage-overlay-fb94761c63dbbb3ab61b3c97da7d1fe794e755386a827e59372d06a26eae375e-merged.mount: Deactivated successfully.
Nov 24 13:56:58 np0005533938 podman[294543]: 2025-11-24 18:56:58.470599146 +0000 UTC m=+0.231758127 container remove 1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:56:58 np0005533938 systemd[1]: libpod-conmon-1806a74497dd119a2b94f25f008829cdd0d05f5af12b04b7b9ef79a33ef03cd6.scope: Deactivated successfully.
Nov 24 13:56:58 np0005533938 podman[294583]: 2025-11-24 18:56:58.676140362 +0000 UTC m=+0.049367948 container create 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:56:58 np0005533938 systemd[1]: Started libpod-conmon-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope.
Nov 24 13:56:58 np0005533938 podman[294583]: 2025-11-24 18:56:58.649039759 +0000 UTC m=+0.022267325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:56:58 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:56:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:56:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:56:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:56:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:56:58 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:56:58 np0005533938 podman[294583]: 2025-11-24 18:56:58.764597645 +0000 UTC m=+0.137825211 container init 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:56:58 np0005533938 podman[294583]: 2025-11-24 18:56:58.778005302 +0000 UTC m=+0.151232848 container start 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:56:58 np0005533938 podman[294583]: 2025-11-24 18:56:58.781318403 +0000 UTC m=+0.154546039 container attach 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 24 13:56:59 np0005533938 strange_ishizaka[294600]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:56:59 np0005533938 strange_ishizaka[294600]: --> relative data size: 1.0
Nov 24 13:56:59 np0005533938 strange_ishizaka[294600]: --> All data devices are unavailable
Nov 24 13:56:59 np0005533938 systemd[1]: libpod-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Deactivated successfully.
Nov 24 13:56:59 np0005533938 podman[294583]: 2025-11-24 18:56:59.86664304 +0000 UTC m=+1.239870596 container died 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:56:59 np0005533938 systemd[1]: libpod-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Consumed 1.041s CPU time.
Nov 24 13:56:59 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ed8de8f1ab2ed77fe62e72ac136dc72011fb40db054f6de5139bbb8c7244724d-merged.mount: Deactivated successfully.
Nov 24 13:56:59 np0005533938 podman[294583]: 2025-11-24 18:56:59.931482236 +0000 UTC m=+1.304709782 container remove 1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:56:59 np0005533938 systemd[1]: libpod-conmon-1ff1215dd890e92ef3e059459152bca0287414f04edbbd0ca4dea91108eb6ba0.scope: Deactivated successfully.
Nov 24 13:57:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.567822555 +0000 UTC m=+0.050722401 container create 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 24 13:57:00 np0005533938 systemd[1]: Started libpod-conmon-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope.
Nov 24 13:57:00 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.541318327 +0000 UTC m=+0.024218213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.986132893 +0000 UTC m=+0.469032799 container init 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.992178071 +0000 UTC m=+0.475077897 container start 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.996096316 +0000 UTC m=+0.478996182 container attach 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 24 13:57:00 np0005533938 objective_elgamal[294800]: 167 167
Nov 24 13:57:00 np0005533938 systemd[1]: libpod-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope: Deactivated successfully.
Nov 24 13:57:00 np0005533938 podman[294784]: 2025-11-24 18:57:00.998254569 +0000 UTC m=+0.481154405 container died 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 24 13:57:01 np0005533938 systemd[1]: var-lib-containers-storage-overlay-dd185a1a4c8602d79f09a797d66e67616dbcf431b577cffa42e311c1c1e01e98-merged.mount: Deactivated successfully.
Nov 24 13:57:01 np0005533938 podman[294784]: 2025-11-24 18:57:01.038614846 +0000 UTC m=+0.521514692 container remove 9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_elgamal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 24 13:57:01 np0005533938 systemd[1]: libpod-conmon-9a5c97dd8474994ad1dd68f557804308879b512b647348cedcb53f006a221c72.scope: Deactivated successfully.
Nov 24 13:57:01 np0005533938 podman[294824]: 2025-11-24 18:57:01.230782435 +0000 UTC m=+0.045744220 container create 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:57:01 np0005533938 systemd[1]: Started libpod-conmon-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope.
Nov 24 13:57:01 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:57:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:01 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:01 np0005533938 podman[294824]: 2025-11-24 18:57:01.212456567 +0000 UTC m=+0.027418362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:57:01 np0005533938 podman[294824]: 2025-11-24 18:57:01.318143991 +0000 UTC m=+0.133105776 container init 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:57:01 np0005533938 podman[294824]: 2025-11-24 18:57:01.335398713 +0000 UTC m=+0.150360498 container start 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 24 13:57:01 np0005533938 podman[294824]: 2025-11-24 18:57:01.340039136 +0000 UTC m=+0.155000921 container attach 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:57:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:02 np0005533938 magical_perlman[294840]: {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    "0": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "devices": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "/dev/loop3"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            ],
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_name": "ceph_lv0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_size": "21470642176",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "name": "ceph_lv0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "tags": {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_name": "ceph",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.crush_device_class": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.encrypted": "0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_id": "0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.vdo": "0"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            },
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "vg_name": "ceph_vg0"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        }
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    ],
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    "1": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "devices": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "/dev/loop4"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            ],
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_name": "ceph_lv1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_size": "21470642176",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "name": "ceph_lv1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "tags": {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_name": "ceph",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.crush_device_class": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.encrypted": "0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_id": "1",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.vdo": "0"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            },
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "vg_name": "ceph_vg1"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        }
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    ],
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    "2": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "devices": [
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "/dev/loop5"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            ],
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_name": "ceph_lv2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_size": "21470642176",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "name": "ceph_lv2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "tags": {
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.cluster_name": "ceph",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.crush_device_class": "",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.encrypted": "0",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osd_id": "2",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:                "ceph.vdo": "0"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            },
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "type": "block",
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:            "vg_name": "ceph_vg2"
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:        }
Nov 24 13:57:02 np0005533938 magical_perlman[294840]:    ]
Nov 24 13:57:02 np0005533938 magical_perlman[294840]: }
Nov 24 13:57:02 np0005533938 systemd[1]: libpod-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope: Deactivated successfully.
Nov 24 13:57:02 np0005533938 podman[294824]: 2025-11-24 18:57:02.10379965 +0000 UTC m=+0.918761465 container died 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 24 13:57:02 np0005533938 systemd[1]: var-lib-containers-storage-overlay-41650be5d281706ec61ee3667db3785801f2c9118c500de8584d4a32745a0038-merged.mount: Deactivated successfully.
Nov 24 13:57:02 np0005533938 podman[294824]: 2025-11-24 18:57:02.210945779 +0000 UTC m=+1.025907574 container remove 4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:57:02 np0005533938 systemd[1]: libpod-conmon-4948e9077cd988637ee15a4227275663b0d194fe7d1154b9578dd36090913a67.scope: Deactivated successfully.
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.055351876 +0000 UTC m=+0.064847757 container create c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:57:03 np0005533938 systemd[1]: Started libpod-conmon-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope.
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.031579764 +0000 UTC m=+0.041075685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:57:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.15943484 +0000 UTC m=+0.168930761 container init c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.171844904 +0000 UTC m=+0.181340785 container start c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.176246222 +0000 UTC m=+0.185742153 container attach c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 24 13:57:03 np0005533938 condescending_feynman[295019]: 167 167
Nov 24 13:57:03 np0005533938 systemd[1]: libpod-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope: Deactivated successfully.
Nov 24 13:57:03 np0005533938 conmon[295019]: conmon c103adeab67b54c1dd91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope/container/memory.events
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.182388722 +0000 UTC m=+0.191884623 container died c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 24 13:57:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:03 np0005533938 systemd[1]: var-lib-containers-storage-overlay-982671a702dcdc401bc785a2d4ae74e8a49765446b2170b9e214a092be621479-merged.mount: Deactivated successfully.
Nov 24 13:57:03 np0005533938 podman[295003]: 2025-11-24 18:57:03.239697653 +0000 UTC m=+0.249193534 container remove c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_feynman, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:57:03 np0005533938 systemd[1]: libpod-conmon-c103adeab67b54c1dd91d99824c216cbdfdcad1df840de4a1c2258f4ef345e26.scope: Deactivated successfully.
Nov 24 13:57:03 np0005533938 podman[295044]: 2025-11-24 18:57:03.48536154 +0000 UTC m=+0.064538889 container create 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 24 13:57:03 np0005533938 systemd[1]: Started libpod-conmon-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope.
Nov 24 13:57:03 np0005533938 podman[295044]: 2025-11-24 18:57:03.454481325 +0000 UTC m=+0.033658744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:57:03 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:57:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:03 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:57:03 np0005533938 podman[295044]: 2025-11-24 18:57:03.597671416 +0000 UTC m=+0.176848835 container init 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:57:03 np0005533938 podman[295044]: 2025-11-24 18:57:03.61298954 +0000 UTC m=+0.192166919 container start 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:57:03 np0005533938 podman[295044]: 2025-11-24 18:57:03.617030409 +0000 UTC m=+0.196207788 container attach 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:04 np0005533938 charming_shaw[295060]: {
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_id": 0,
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "type": "bluestore"
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    },
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_id": 1,
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "type": "bluestore"
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    },
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_id": 2,
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:        "type": "bluestore"
Nov 24 13:57:04 np0005533938 charming_shaw[295060]:    }
Nov 24 13:57:04 np0005533938 charming_shaw[295060]: }
Nov 24 13:57:04 np0005533938 systemd[1]: libpod-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope: Deactivated successfully.
Nov 24 13:57:04 np0005533938 podman[295044]: 2025-11-24 18:57:04.594659293 +0000 UTC m=+1.173836632 container died 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:57:04 np0005533938 systemd[1]: var-lib-containers-storage-overlay-c1f353d1a6b0ea5f63c562e2c9920084798b89268deead9cefcc521238cfb8a1-merged.mount: Deactivated successfully.
Nov 24 13:57:04 np0005533938 podman[295044]: 2025-11-24 18:57:04.645168338 +0000 UTC m=+1.224345687 container remove 1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 24 13:57:04 np0005533938 systemd[1]: libpod-conmon-1238047ad75a488c9a3313e6c875b7719b1f97a82bc261e948a7ab5e93d6a135.scope: Deactivated successfully.
Nov 24 13:57:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:57:04 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:04 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 3d8b4074-ca96-4491-a33a-e57db0c9f175 does not exist
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 0455ed5a-9d2a-45a8-9673-956cc51062b6 does not exist
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:57:05 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:57:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:16 np0005533938 nova_compute[270693]: 2025-11-24 18:57:16.524 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:16 np0005533938 podman[295157]: 2025-11-24 18:57:16.998864662 +0000 UTC m=+0.070501875 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 24 13:57:17 np0005533938 podman[295155]: 2025-11-24 18:57:16.99999954 +0000 UTC m=+0.081510364 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 24 13:57:17 np0005533938 podman[295156]: 2025-11-24 18:57:17.035755034 +0000 UTC m=+0.123534192 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 24 13:57:17 np0005533938 nova_compute[270693]: 2025-11-24 18:57:17.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:17 np0005533938 nova_compute[270693]: 2025-11-24 18:57:17.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:57:17 np0005533938 nova_compute[270693]: 2025-11-24 18:57:17.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:57:17 np0005533938 nova_compute[270693]: 2025-11-24 18:57:17.545 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:57:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.580 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:57:18 np0005533938 nova_compute[270693]: 2025-11-24 18:57:18.581 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/223273830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.043 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4117175616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.219 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.220 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5011MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.220 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.221 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.318 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.318 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.365 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:57:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532253814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.807 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.813 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.840 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.841 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:57:19 np0005533938 nova_compute[270693]: 2025-11-24 18:57:19.841 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:57:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:21 np0005533938 nova_compute[270693]: 2025-11-24 18:57:21.838 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:21 np0005533938 nova_compute[270693]: 2025-11-24 18:57:21.856 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:21 np0005533938 nova_compute[270693]: 2025-11-24 18:57:21.857 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:21 np0005533938 nova_compute[270693]: 2025-11-24 18:57:21.857 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:57:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:22 np0005533938 nova_compute[270693]: 2025-11-24 18:57:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:22 np0005533938 nova_compute[270693]: 2025-11-24 18:57:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:22 np0005533938 nova_compute[270693]: 2025-11-24 18:57:22.530 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:57:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.754 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:57:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.755 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:57:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:57:22.755 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:57:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:24 np0005533938 nova_compute[270693]: 2025-11-24 18:57:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 24 13:57:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:57:34
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log']
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:57:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:57:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:57:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:57:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:47 np0005533938 podman[295262]: 2025-11-24 18:57:47.973584158 +0000 UTC m=+0.060229363 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 24 13:57:47 np0005533938 podman[295260]: 2025-11-24 18:57:47.992432609 +0000 UTC m=+0.069928241 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 24 13:57:48 np0005533938 podman[295261]: 2025-11-24 18:57:48.019817919 +0000 UTC m=+0.106433183 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 24 13:57:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:57:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:57:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:05 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5ecfe9f6-7272-44da-a186-8edc7dd77885 does not exist
Nov 24 13:58:05 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 41f12385-01f5-4023-a589-89c263714029 does not exist
Nov 24 13:58:05 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 5747bc10-5f64-405d-9c84-08ec96afeb9f does not exist
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:58:05 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:58:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:58:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:06 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.469180214 +0000 UTC m=+0.053423098 container create 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 24 13:58:06 np0005533938 systemd[1]: Started libpod-conmon-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope.
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.442234245 +0000 UTC m=+0.026477169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.574100569 +0000 UTC m=+0.158343503 container init 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.585951429 +0000 UTC m=+0.170194313 container start 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.589863255 +0000 UTC m=+0.174106099 container attach 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:58:06 np0005533938 distracted_wright[295612]: 167 167
Nov 24 13:58:06 np0005533938 systemd[1]: libpod-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope: Deactivated successfully.
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.594201111 +0000 UTC m=+0.178443955 container died 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:58:06 np0005533938 systemd[1]: var-lib-containers-storage-overlay-5cfb24ce46df06484255380deac1e3d43134a9739ebf71b04a10c32837d5932c-merged.mount: Deactivated successfully.
Nov 24 13:58:06 np0005533938 podman[295595]: 2025-11-24 18:58:06.632008555 +0000 UTC m=+0.216251399 container remove 82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 24 13:58:06 np0005533938 systemd[1]: libpod-conmon-82228a4818fe2813dc9ff9c2772a99b4c65739a2e2eb7ea290907d5cb9886036.scope: Deactivated successfully.
Nov 24 13:58:06 np0005533938 podman[295636]: 2025-11-24 18:58:06.82157599 +0000 UTC m=+0.065665587 container create bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 24 13:58:06 np0005533938 systemd[1]: Started libpod-conmon-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope.
Nov 24 13:58:06 np0005533938 podman[295636]: 2025-11-24 18:58:06.795301388 +0000 UTC m=+0.039391005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:06 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:06 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:06 np0005533938 podman[295636]: 2025-11-24 18:58:06.924726612 +0000 UTC m=+0.168816249 container init bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 24 13:58:06 np0005533938 podman[295636]: 2025-11-24 18:58:06.941937003 +0000 UTC m=+0.186026580 container start bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 24 13:58:06 np0005533938 podman[295636]: 2025-11-24 18:58:06.94547057 +0000 UTC m=+0.189560147 container attach bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 24 13:58:08 np0005533938 keen_ptolemy[295652]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:58:08 np0005533938 keen_ptolemy[295652]: --> relative data size: 1.0
Nov 24 13:58:08 np0005533938 keen_ptolemy[295652]: --> All data devices are unavailable
Nov 24 13:58:08 np0005533938 systemd[1]: libpod-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Deactivated successfully.
Nov 24 13:58:08 np0005533938 systemd[1]: libpod-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Consumed 1.019s CPU time.
Nov 24 13:58:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:08 np0005533938 podman[295681]: 2025-11-24 18:58:08.06078048 +0000 UTC m=+0.025194577 container died bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 24 13:58:08 np0005533938 systemd[1]: var-lib-containers-storage-overlay-ed78ca1f382d8c36b414b0edfa1229495cd156d84fd47bd7ab374ebefa3f8fa5-merged.mount: Deactivated successfully.
Nov 24 13:58:08 np0005533938 podman[295681]: 2025-11-24 18:58:08.116919602 +0000 UTC m=+0.081333669 container remove bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 24 13:58:08 np0005533938 systemd[1]: libpod-conmon-bd01bb15661836e7d3350ce0c5b17e58ca1c7ccb7dda5672c12139b3fcc6dea1.scope: Deactivated successfully.
Nov 24 13:58:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:08 np0005533938 podman[295836]: 2025-11-24 18:58:08.901303941 +0000 UTC m=+0.037927828 container create 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:58:08 np0005533938 systemd[1]: Started libpod-conmon-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope.
Nov 24 13:58:08 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:08 np0005533938 podman[295836]: 2025-11-24 18:58:08.88613102 +0000 UTC m=+0.022754927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:08 np0005533938 podman[295836]: 2025-11-24 18:58:08.988518533 +0000 UTC m=+0.125142430 container init 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:58:08 np0005533938 podman[295836]: 2025-11-24 18:58:08.998571119 +0000 UTC m=+0.135195006 container start 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 24 13:58:09 np0005533938 podman[295836]: 2025-11-24 18:58:09.001652575 +0000 UTC m=+0.138276482 container attach 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 24 13:58:09 np0005533938 gifted_lovelace[295852]: 167 167
Nov 24 13:58:09 np0005533938 systemd[1]: libpod-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope: Deactivated successfully.
Nov 24 13:58:09 np0005533938 podman[295836]: 2025-11-24 18:58:09.002672009 +0000 UTC m=+0.139295896 container died 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:58:09 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6a6865d003bff5fb427d29a34c7bd7df59857945c5a23d819011234f14e6586f-merged.mount: Deactivated successfully.
Nov 24 13:58:09 np0005533938 podman[295836]: 2025-11-24 18:58:09.047461505 +0000 UTC m=+0.184085422 container remove 56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:58:09 np0005533938 systemd[1]: libpod-conmon-56efcc867fef987a955c283b52bc70867fd0f53977485940bc2414392b1883e0.scope: Deactivated successfully.
Nov 24 13:58:09 np0005533938 podman[295877]: 2025-11-24 18:58:09.213951405 +0000 UTC m=+0.036114864 container create fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:58:09 np0005533938 systemd[1]: Started libpod-conmon-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope.
Nov 24 13:58:09 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:09 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:09 np0005533938 podman[295877]: 2025-11-24 18:58:09.293364827 +0000 UTC m=+0.115528286 container init fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:58:09 np0005533938 podman[295877]: 2025-11-24 18:58:09.19940738 +0000 UTC m=+0.021570859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:09 np0005533938 podman[295877]: 2025-11-24 18:58:09.31024423 +0000 UTC m=+0.132407689 container start fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:58:09 np0005533938 podman[295877]: 2025-11-24 18:58:09.313268424 +0000 UTC m=+0.135431883 container attach fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]: {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    "0": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "devices": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "/dev/loop3"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            ],
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_name": "ceph_lv0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_size": "21470642176",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "name": "ceph_lv0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "tags": {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_name": "ceph",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.crush_device_class": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.encrypted": "0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_id": "0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.vdo": "0"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            },
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "vg_name": "ceph_vg0"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        }
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    ],
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    "1": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "devices": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "/dev/loop4"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            ],
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_name": "ceph_lv1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_size": "21470642176",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "name": "ceph_lv1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "tags": {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_name": "ceph",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.crush_device_class": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.encrypted": "0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_id": "1",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.vdo": "0"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            },
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "vg_name": "ceph_vg1"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        }
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    ],
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    "2": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "devices": [
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "/dev/loop5"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            ],
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_name": "ceph_lv2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_size": "21470642176",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "name": "ceph_lv2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "tags": {
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.cluster_name": "ceph",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.crush_device_class": "",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.encrypted": "0",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osd_id": "2",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:                "ceph.vdo": "0"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            },
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "type": "block",
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:            "vg_name": "ceph_vg2"
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:        }
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]:    ]
Nov 24 13:58:10 np0005533938 peaceful_williams[295894]: }
Nov 24 13:58:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:10 np0005533938 systemd[1]: libpod-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope: Deactivated successfully.
Nov 24 13:58:10 np0005533938 podman[295877]: 2025-11-24 18:58:10.081427765 +0000 UTC m=+0.903591244 container died fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:58:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-62a836b02d5f19e0007a17438b1fd069a204ff9c1fdd6accd6326280c8e1f7be-merged.mount: Deactivated successfully.
Nov 24 13:58:10 np0005533938 podman[295877]: 2025-11-24 18:58:10.146876845 +0000 UTC m=+0.969040314 container remove fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 24 13:58:10 np0005533938 systemd[1]: libpod-conmon-fb3bc03dfb93ccfeceec76eb01548db170610cd8d4da212f436152f551d0f746.scope: Deactivated successfully.
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.7196594 +0000 UTC m=+0.039344633 container create 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 24 13:58:10 np0005533938 systemd[1]: Started libpod-conmon-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope.
Nov 24 13:58:10 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.69799304 +0000 UTC m=+0.017678273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.796596041 +0000 UTC m=+0.116281304 container init 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.802657349 +0000 UTC m=+0.122342582 container start 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.805709554 +0000 UTC m=+0.125394847 container attach 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:58:10 np0005533938 strange_hopper[296071]: 167 167
Nov 24 13:58:10 np0005533938 systemd[1]: libpod-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope: Deactivated successfully.
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.809409365 +0000 UTC m=+0.129094638 container died 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:58:10 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6ce32cbc2e4064baeb6ce532c9c04b359a1b82583794a41c86409fbe077a468a-merged.mount: Deactivated successfully.
Nov 24 13:58:10 np0005533938 podman[296055]: 2025-11-24 18:58:10.847369513 +0000 UTC m=+0.167054746 container remove 4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hopper, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:58:10 np0005533938 systemd[1]: libpod-conmon-4581613fec3c8f9c9b7b510331e2f871ddd96bb52748fe696a2d513d6903e0ca.scope: Deactivated successfully.
Nov 24 13:58:11 np0005533938 podman[296094]: 2025-11-24 18:58:11.006028122 +0000 UTC m=+0.038054401 container create c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:58:11 np0005533938 systemd[1]: Started libpod-conmon-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope.
Nov 24 13:58:11 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:58:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:11 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:58:11 np0005533938 podman[296094]: 2025-11-24 18:58:11.083414064 +0000 UTC m=+0.115440373 container init c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:58:11 np0005533938 podman[296094]: 2025-11-24 18:58:10.992183973 +0000 UTC m=+0.024210262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:58:11 np0005533938 podman[296094]: 2025-11-24 18:58:11.089217036 +0000 UTC m=+0.121243325 container start c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 24 13:58:11 np0005533938 podman[296094]: 2025-11-24 18:58:11.093009269 +0000 UTC m=+0.125035578 container attach c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:58:12 np0005533938 modest_nash[296110]: {
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_id": 0,
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "type": "bluestore"
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    },
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_id": 1,
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "type": "bluestore"
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    },
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_id": 2,
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:58:12 np0005533938 modest_nash[296110]:        "type": "bluestore"
Nov 24 13:58:12 np0005533938 modest_nash[296110]:    }
Nov 24 13:58:12 np0005533938 modest_nash[296110]: }
Nov 24 13:58:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:12 np0005533938 systemd[1]: libpod-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope: Deactivated successfully.
Nov 24 13:58:12 np0005533938 podman[296094]: 2025-11-24 18:58:12.074559748 +0000 UTC m=+1.106586037 container died c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 24 13:58:12 np0005533938 systemd[1]: var-lib-containers-storage-overlay-f4c348d70639cf0c1b22ef428aecb2533c9f5e3d2adc2f73222a9a94aea11139-merged.mount: Deactivated successfully.
Nov 24 13:58:12 np0005533938 podman[296094]: 2025-11-24 18:58:12.133280884 +0000 UTC m=+1.165307183 container remove c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:58:12 np0005533938 systemd[1]: libpod-conmon-c90fa050e9e7bda91466fe7d4df5924fd861fa86cc1042864eea68a35c9a4777.scope: Deactivated successfully.
Nov 24 13:58:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:58:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:12 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:58:12 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:12 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev aa72c128-d6e2-40bd-9a2a-12bbdab8a3d7 does not exist
Nov 24 13:58:12 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 4ceddd02-442c-4ed0-bc5d-b9b08ebdcdca does not exist
Nov 24 13:58:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:58:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:17 np0005533938 nova_compute[270693]: 2025-11-24 18:58:17.523 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.563 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.564 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.564 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:58:18 np0005533938 nova_compute[270693]: 2025-11-24 18:58:18.565 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:58:19 np0005533938 podman[296225]: 2025-11-24 18:58:19.000787618 +0000 UTC m=+0.080384437 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:58:19 np0005533938 podman[296227]: 2025-11-24 18:58:19.025782259 +0000 UTC m=+0.098772926 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804493441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.047 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:58:19 np0005533938 podman[296226]: 2025-11-24 18:58:19.067888698 +0000 UTC m=+0.144904704 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1061668998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.243 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4984MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.245 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.308 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.308 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.331 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:58:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810146975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.811 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.816 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.834 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.836 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:19 np0005533938 nova_compute[270693]: 2025-11-24 18:58:19.837 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 24 13:58:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:20 np0005533938 nova_compute[270693]: 2025-11-24 18:58:20.856 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:20 np0005533938 nova_compute[270693]: 2025-11-24 18:58:20.856 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:58:20 np0005533938 nova_compute[270693]: 2025-11-24 18:58:20.857 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:58:20 np0005533938 nova_compute[270693]: 2025-11-24 18:58:20.879 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:58:21 np0005533938 nova_compute[270693]: 2025-11-24 18:58:21.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.530 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 24 13:58:22 np0005533938 nova_compute[270693]: 2025-11-24 18:58:22.543 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 24 13:58:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.756 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:58:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.756 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:58:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:58:22.757 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:58:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:23 np0005533938 nova_compute[270693]: 2025-11-24 18:58:23.543 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:24 np0005533938 nova_compute[270693]: 2025-11-24 18:58:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:26 np0005533938 nova_compute[270693]: 2025-11-24 18:58:26.528 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:26 np0005533938 nova_compute[270693]: 2025-11-24 18:58:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:58:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:58:34
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'backups']
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:58:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:58:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:58:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:58:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:58:47 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6084 writes, 27K keys, 6084 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6084 writes, 6084 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1448 writes, 6397 keys, 1448 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s#012Interval WAL: 1448 writes, 1448 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.6      0.32              0.09        15    0.021       0      0       0.0       0.0#012  L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    192.6    156.6      0.65              0.29        14    0.047     64K   7856       0.0       0.0#012 Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    129.3    136.6      0.97              0.38        29    0.034     64K   7856       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    135.6    135.3      0.24              0.11         6    0.040     16K   2076       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    192.6    156.6      0.65              0.29        14    0.047     64K   7856       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     96.0      0.32              0.09        14    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.0      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.0 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562af0cfd1f0#2 capacity: 304.00 MB usage: 13.80 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00013 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1008,13.28 MB,4.3687%) FilterBlock(30,186.30 KB,0.0598456%) IndexBlock(30,345.12 KB,0.110867%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 24 13:58:48 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:48 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:49 np0005533938 podman[296309]: 2025-11-24 18:58:49.991893524 +0000 UTC m=+0.089107090 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 24 13:58:50 np0005533938 podman[296311]: 2025-11-24 18:58:50.012814955 +0000 UTC m=+0.101971844 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 24 13:58:50 np0005533938 podman[296310]: 2025-11-24 18:58:50.047862412 +0000 UTC m=+0.131679651 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 24 13:58:50 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:52 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:53 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:58:54 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:56 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:58 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:58:58 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:00 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:02 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:03 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:04 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:06 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:08 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:08 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:10 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:12 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 36b1f83f-e10d-4975-b856-bfc81e46ef94 does not exist
Nov 24 13:59:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev aa2e5981-c588-4ffb-baf0-87721f6129cc does not exist
Nov 24 13:59:13 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 1d4e5dd6-e2f8-4a8d-8483-c8a71f82499c does not exist
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:13 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.789081223 +0000 UTC m=+0.045872692 container create af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:59:13 np0005533938 systemd[1]: Started libpod-conmon-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope.
Nov 24 13:59:13 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.770573911 +0000 UTC m=+0.027365370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.876606103 +0000 UTC m=+0.133397572 container init af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.887010547 +0000 UTC m=+0.143802036 container start af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:13 np0005533938 goofy_wu[296665]: 167 167
Nov 24 13:59:13 np0005533938 systemd[1]: libpod-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope: Deactivated successfully.
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.891803455 +0000 UTC m=+0.148594944 container attach af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.892736407 +0000 UTC m=+0.149527846 container died af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:59:13 np0005533938 systemd[1]: var-lib-containers-storage-overlay-963f4cf3ded6cbfa2e46865697cb195d87b9a7b29eda3a4e9ae3112f3c572c77-merged.mount: Deactivated successfully.
Nov 24 13:59:13 np0005533938 podman[296649]: 2025-11-24 18:59:13.934786746 +0000 UTC m=+0.191578175 container remove af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:59:13 np0005533938 systemd[1]: libpod-conmon-af60b8fb2825d19fcffdb0136d197073affcef9aae476f7550010bde57ef91b2.scope: Deactivated successfully.
Nov 24 13:59:14 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:14 np0005533938 podman[296687]: 2025-11-24 18:59:14.117237407 +0000 UTC m=+0.048774434 container create 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 24 13:59:14 np0005533938 systemd[1]: Started libpod-conmon-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope.
Nov 24 13:59:14 np0005533938 podman[296687]: 2025-11-24 18:59:14.097125425 +0000 UTC m=+0.028662542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:14 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:14 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:14 np0005533938 podman[296687]: 2025-11-24 18:59:14.225037822 +0000 UTC m=+0.156574939 container init 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 24 13:59:14 np0005533938 podman[296687]: 2025-11-24 18:59:14.233754776 +0000 UTC m=+0.165291813 container start 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 24 13:59:14 np0005533938 podman[296687]: 2025-11-24 18:59:14.237496967 +0000 UTC m=+0.169034084 container attach 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 24 13:59:15 np0005533938 quizzical_dijkstra[296704]: --> passed data devices: 0 physical, 3 LVM
Nov 24 13:59:15 np0005533938 quizzical_dijkstra[296704]: --> relative data size: 1.0
Nov 24 13:59:15 np0005533938 quizzical_dijkstra[296704]: --> All data devices are unavailable
Nov 24 13:59:15 np0005533938 systemd[1]: libpod-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Deactivated successfully.
Nov 24 13:59:15 np0005533938 systemd[1]: libpod-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Consumed 1.062s CPU time.
Nov 24 13:59:15 np0005533938 podman[296733]: 2025-11-24 18:59:15.40773391 +0000 UTC m=+0.036857112 container died 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:15 np0005533938 systemd[1]: var-lib-containers-storage-overlay-4fdb0d51f2de07d09876b2c569e0e518479eae1d532f47f41c37ba5a33444f32-merged.mount: Deactivated successfully.
Nov 24 13:59:15 np0005533938 podman[296733]: 2025-11-24 18:59:15.481517724 +0000 UTC m=+0.110640896 container remove 90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:15 np0005533938 systemd[1]: libpod-conmon-90af680f629c4b331c9c1ed9f733fb57146f74c54ed52a77a9f972a167ff8616.scope: Deactivated successfully.
Nov 24 13:59:16 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.207063744 +0000 UTC m=+0.061855043 container create 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:59:16 np0005533938 systemd[1]: Started libpod-conmon-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope.
Nov 24 13:59:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.181762716 +0000 UTC m=+0.036554065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.288896335 +0000 UTC m=+0.143803277 container init 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.301807701 +0000 UTC m=+0.156598980 container start 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.306184128 +0000 UTC m=+0.160975477 container attach 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 24 13:59:16 np0005533938 suspicious_jang[296905]: 167 167
Nov 24 13:59:16 np0005533938 systemd[1]: libpod-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope: Deactivated successfully.
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.307413828 +0000 UTC m=+0.162205097 container died 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 24 13:59:16 np0005533938 systemd[1]: var-lib-containers-storage-overlay-b456baebdc80d427952cb544439d875d3a8453e8032b4868227f93cabce9d067-merged.mount: Deactivated successfully.
Nov 24 13:59:16 np0005533938 podman[296889]: 2025-11-24 18:59:16.357078922 +0000 UTC m=+0.211870221 container remove 1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 24 13:59:16 np0005533938 systemd[1]: libpod-conmon-1a3e5b7fcdb37f7b48110a8e844299d102693564e3c37f2d9bc664e1c070de66.scope: Deactivated successfully.
Nov 24 13:59:16 np0005533938 podman[296930]: 2025-11-24 18:59:16.597006069 +0000 UTC m=+0.035715625 container create 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:16 np0005533938 systemd[1]: Started libpod-conmon-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope.
Nov 24 13:59:16 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:16 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:16 np0005533938 podman[296930]: 2025-11-24 18:59:16.57946832 +0000 UTC m=+0.018177886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:16 np0005533938 podman[296930]: 2025-11-24 18:59:16.678426409 +0000 UTC m=+0.117135995 container init 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 24 13:59:16 np0005533938 podman[296930]: 2025-11-24 18:59:16.6870255 +0000 UTC m=+0.125735036 container start 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 24 13:59:16 np0005533938 podman[296930]: 2025-11-24 18:59:16.69029646 +0000 UTC m=+0.129006046 container attach 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:17 np0005533938 confident_cannon[296947]: {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    "0": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "devices": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "/dev/loop3"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            ],
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_name": "ceph_lv0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_size": "21470642176",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f8f8fab-5f72-4f8f-b22f-80baf46bd30b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "name": "ceph_lv0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "tags": {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_uuid": "Sfr7SH-Egb7-P17k-zug3-wdne-Lhos-ZYWBPW",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_name": "ceph",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.crush_device_class": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.encrypted": "0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_fsid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_id": "0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.vdo": "0"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            },
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "vg_name": "ceph_vg0"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        }
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    ],
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    "1": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "devices": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "/dev/loop4"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            ],
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_name": "ceph_lv1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_size": "21470642176",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79b9678c-793a-417c-9179-1829e79d1a19,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "name": "ceph_lv1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "tags": {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_uuid": "TUSaRK-Z5eh-O1g1-WhIN-fwpl-3Mcu-Bppica",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_name": "ceph",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.crush_device_class": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.encrypted": "0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_fsid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_id": "1",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.vdo": "0"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            },
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "vg_name": "ceph_vg1"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        }
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    ],
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    "2": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "devices": [
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "/dev/loop5"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            ],
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_name": "ceph_lv2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_size": "21470642176",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e5ee928f-099b-569b-93c9-ecf025cbb50d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d6904eab-3369-4532-8b99-18f2965a8556,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "lv_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "name": "ceph_lv2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "tags": {
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.block_uuid": "iWQsGy-9tLj-1ufy-DVJX-4bk0-TheD-iECXN2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cephx_lockbox_secret": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.cluster_name": "ceph",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.crush_device_class": "",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.encrypted": "0",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_fsid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osd_id": "2",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:                "ceph.vdo": "0"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            },
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "type": "block",
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:            "vg_name": "ceph_vg2"
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:        }
Nov 24 13:59:17 np0005533938 confident_cannon[296947]:    ]
Nov 24 13:59:17 np0005533938 confident_cannon[296947]: }
Nov 24 13:59:17 np0005533938 systemd[1]: libpod-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope: Deactivated successfully.
Nov 24 13:59:17 np0005533938 podman[296930]: 2025-11-24 18:59:17.409414702 +0000 UTC m=+0.848124268 container died 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 24 13:59:17 np0005533938 systemd[1]: var-lib-containers-storage-overlay-0e5dd399732e80633bd42a95da43c27a367b1fc983d35aa0e4945bf535407a35-merged.mount: Deactivated successfully.
Nov 24 13:59:17 np0005533938 podman[296930]: 2025-11-24 18:59:17.46834203 +0000 UTC m=+0.907051576 container remove 51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 24 13:59:17 np0005533938 systemd[1]: libpod-conmon-51cfc1b5cad5a28054cfabacc264931cf903990ad81c9e6ee6655074d34d19e2.scope: Deactivated successfully.
Nov 24 13:59:18 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.127662277 +0000 UTC m=+0.041009273 container create 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 24 13:59:18 np0005533938 systemd[1]: Started libpod-conmon-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope.
Nov 24 13:59:18 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.108237178 +0000 UTC m=+0.021584194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.222312882 +0000 UTC m=+0.135659958 container init 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 24 13:59:18 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.236022721 +0000 UTC m=+0.149369697 container start 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.239435335 +0000 UTC m=+0.152782351 container attach 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 24 13:59:18 np0005533938 vigorous_wright[297124]: 167 167
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.243054984 +0000 UTC m=+0.156402000 container died 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 24 13:59:18 np0005533938 systemd[1]: libpod-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope: Deactivated successfully.
Nov 24 13:59:18 np0005533938 systemd[1]: var-lib-containers-storage-overlay-973632bcda735c76b2d9b352e9c7f2f559efc05c1ad8b64fc3e9e440e51b7f1a-merged.mount: Deactivated successfully.
Nov 24 13:59:18 np0005533938 podman[297108]: 2025-11-24 18:59:18.288250009 +0000 UTC m=+0.201596985 container remove 2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 24 13:59:18 np0005533938 systemd[1]: libpod-conmon-2c93574e3ad3048827097ecc141da926436074fe19660a8c76debe5347b4795c.scope: Deactivated successfully.
Nov 24 13:59:18 np0005533938 podman[297149]: 2025-11-24 18:59:18.463614376 +0000 UTC m=+0.039349332 container create 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 24 13:59:18 np0005533938 systemd[1]: Started libpod-conmon-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope.
Nov 24 13:59:18 np0005533938 systemd[1]: Started libcrun container.
Nov 24 13:59:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:18 np0005533938 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 24 13:59:18 np0005533938 podman[297149]: 2025-11-24 18:59:18.536022283 +0000 UTC m=+0.111757259 container init 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 24 13:59:18 np0005533938 podman[297149]: 2025-11-24 18:59:18.445834367 +0000 UTC m=+0.021569353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 24 13:59:18 np0005533938 podman[297149]: 2025-11-24 18:59:18.543829135 +0000 UTC m=+0.119564091 container start 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 24 13:59:18 np0005533938 podman[297149]: 2025-11-24 18:59:18.54687531 +0000 UTC m=+0.122610276 container attach 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/13639317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]: {
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b": {
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_id": 0,
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_uuid": "1f8f8fab-5f72-4f8f-b22f-80baf46bd30b",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "type": "bluestore"
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    },
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    "79b9678c-793a-417c-9179-1829e79d1a19": {
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_id": 1,
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_uuid": "79b9678c-793a-417c-9179-1829e79d1a19",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "type": "bluestore"
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    },
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    "d6904eab-3369-4532-8b99-18f2965a8556": {
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "ceph_fsid": "e5ee928f-099b-569b-93c9-ecf025cbb50d",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_id": 2,
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "osd_uuid": "d6904eab-3369-4532-8b99-18f2965a8556",
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:        "type": "bluestore"
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]:    }
Nov 24 13:59:19 np0005533938 trusting_bardeen[297165]: }
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.540 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.542 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.569 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.570 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.571 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:59:19 np0005533938 systemd[1]: libpod-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Deactivated successfully.
Nov 24 13:59:19 np0005533938 systemd[1]: libpod-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Consumed 1.033s CPU time.
Nov 24 13:59:19 np0005533938 podman[297149]: 2025-11-24 18:59:19.572081105 +0000 UTC m=+1.147816061 container died 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 24 13:59:19 np0005533938 systemd[1]: var-lib-containers-storage-overlay-6e6cf385473b39befbea846533302760ef1ca895fc832108c41da4a460ecfdeb-merged.mount: Deactivated successfully.
Nov 24 13:59:19 np0005533938 podman[297149]: 2025-11-24 18:59:19.641697363 +0000 UTC m=+1.217432329 container remove 2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 24 13:59:19 np0005533938 systemd[1]: libpod-conmon-2037551e3baa025b0c6a4f6b15037cfb534da03b0ba0c96a0a00b70cfdd41b53.scope: Deactivated successfully.
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 4cacfeab-dfbd-4d35-9054-a98f6c3015e1 does not exist
Nov 24 13:59:19 np0005533938 ceph-mgr[75218]: [progress WARNING root] complete: ev 81e19dc3-132f-44ad-9049-8bb7e57b6e4c does not exist
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:59:19 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596783831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:59:19 np0005533938 nova_compute[270693]: 2025-11-24 18:59:19.994 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:59:20 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.123 270697 WARNING nova.virt.libvirt.driver [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.124 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.124 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.125 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.291 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.291 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.386 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing inventories for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.466 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating ProviderTree inventory for provider d1cce7ec-de83-4810-91f8-1852891da8a6 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.467 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Updating inventory in ProviderTree for provider d1cce7ec-de83-4810-91f8-1852891da8a6 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.484 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing aggregate associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.509 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Refreshing trait associations for resource provider d1cce7ec-de83-4810-91f8-1852891da8a6, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.529 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 24 13:59:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:20 np0005533938 ceph-mon[74927]: from='mgr.14132 192.168.122.100:0/873337789' entity='mgr.compute-0.dfqptp' 
Nov 24 13:59:20 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 24 13:59:20 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600938882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.960 270697 DEBUG oslo_concurrency.processutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 24 13:59:20 np0005533938 nova_compute[270693]: 2025-11-24 18:59:20.967 270697 DEBUG nova.compute.provider_tree [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed in ProviderTree for provider: d1cce7ec-de83-4810-91f8-1852891da8a6 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 24 13:59:20 np0005533938 podman[297304]: 2025-11-24 18:59:20.991143016 +0000 UTC m=+0.078873296 container health_status 016a20f4087684009add8e029f803e96f64f8b87187e5e93626a1846a395bcbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 24 13:59:20 np0005533938 podman[297306]: 2025-11-24 18:59:20.992122751 +0000 UTC m=+0.068735096 container health_status e9c0ef7e27de8c634c7173f8b0784ad71d54cf46045ec0dde10fc1049ace0514 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 24 13:59:21 np0005533938 podman[297305]: 2025-11-24 18:59:21.022731126 +0000 UTC m=+0.107658736 container health_status 258bc419eab388fa11a59c8b21ee192dbf728e211567f3163ff145a5d729ac9d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 24 13:59:21 np0005533938 nova_compute[270693]: 2025-11-24 18:59:21.025 270697 DEBUG nova.scheduler.client.report [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Inventory has not changed for provider d1cce7ec-de83-4810-91f8-1852891da8a6 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 24 13:59:21 np0005533938 nova_compute[270693]: 2025-11-24 18:59:21.026 270697 DEBUG nova.compute.resource_tracker [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 24 13:59:21 np0005533938 nova_compute[270693]: 2025-11-24 18:59:21.026 270697 DEBUG oslo_concurrency.lockutils [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.014 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.032 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.032 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.033 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.045 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.046 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:22 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:22 np0005533938 nova_compute[270693]: 2025-11-24 18:59:22.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.757 179763 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 24 13:59:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.758 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 24 13:59:22 np0005533938 ovn_metadata_agent[179758]: 2025-11-24 18:59:22.758 179763 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 24 13:59:23 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:24 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:24 np0005533938 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:24 np0005533938 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:24 np0005533938 nova_compute[270693]: 2025-11-24 18:59:24.529 270697 DEBUG nova.compute.manager [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 24 13:59:26 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:26 np0005533938 nova_compute[270693]: 2025-11-24 18:59:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:26 np0005533938 nova_compute[270693]: 2025-11-24 18:59:26.529 270697 DEBUG oslo_service.periodic_task [None req-834cc35f-c932-47d8-a2fc-91b41fef2015 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.759106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766759143, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3522424, "memory_usage": 3586336, "flush_reason": "Manual Compaction"}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766802393, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3456683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25890, "largest_seqno": 27946, "table_properties": {"data_size": 3447104, "index_size": 6137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18750, "raw_average_key_size": 20, "raw_value_size": 3428279, "raw_average_value_size": 3678, "num_data_blocks": 272, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764010532, "oldest_key_time": 1764010532, "file_creation_time": 1764010766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 43329 microseconds, and 9272 cpu microseconds.
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.802434) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3456683 bytes OK
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.802451) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808930) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808947) EVENT_LOG_v1 {"time_micros": 1764010766808942, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.808964) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3513798, prev total WAL file size 3513798, number of live WAL files 2.
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.809980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3375KB)], [59(7255KB)]
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766810015, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10885885, "oldest_snapshot_seqno": -1}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5247 keys, 9092319 bytes, temperature: kUnknown
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766853501, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9092319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9055541, "index_size": 22588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 130647, "raw_average_key_size": 24, "raw_value_size": 8958964, "raw_average_value_size": 1707, "num_data_blocks": 931, "num_entries": 5247, "num_filter_entries": 5247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764008324, "oldest_key_time": 0, "file_creation_time": 1764010766, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5bcbf129-cc59-4441-a37f-051fd374ef44", "db_session_id": "WW3CBZDUF00LP3K0CKDH", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.853758) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9092319 bytes
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.855096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.9 rd, 208.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5761, records dropped: 514 output_compression: NoCompression
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.855116) EVENT_LOG_v1 {"time_micros": 1764010766855107, "job": 32, "event": "compaction_finished", "compaction_time_micros": 43553, "compaction_time_cpu_micros": 22702, "output_level": 6, "num_output_files": 1, "total_output_size": 9092319, "num_input_records": 5761, "num_output_records": 5247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766856003, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764010766857740, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.809854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:26 np0005533938 ceph-mon[74927]: rocksdb: (Original Log Time 2025/11/24-18:59:26.857802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 24 13:59:28 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:28 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:29 np0005533938 systemd-logind[822]: New session 57 of user zuul.
Nov 24 13:59:29 np0005533938 systemd[1]: Started Session 57 of User zuul.
Nov 24 13:59:30 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:32 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15049 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:32 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:32 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15051 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 24 13:59:33 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279853532' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 24 13:59:33 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Optimize plan auto_2025-11-24_18:59:34
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] do_upmap
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [balancer INFO root] prepared 0/10 changes
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] scanning for idle connections..
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [volumes INFO mgr_util] cleaning up connections: []
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:59:34 np0005533938 ceph-mgr[75218]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 24 13:59:35 np0005533938 ovs-vsctl[297654]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 24 13:59:36 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:36 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 24 13:59:36 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 24 13:59:36 np0005533938 virtqemud[270425]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 24 13:59:37 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: cache status {prefix=cache status} (starting...)
Nov 24 13:59:37 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: client ls {prefix=client ls} (starting...)
Nov 24 13:59:37 np0005533938 lvm[298019]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 24 13:59:37 np0005533938 lvm[298019]: VG ceph_vg2 finished
Nov 24 13:59:37 np0005533938 lvm[298022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 24 13:59:37 np0005533938 lvm[298022]: VG ceph_vg0 finished
Nov 24 13:59:37 np0005533938 lvm[298027]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 24 13:59:37 np0005533938 lvm[298027]: VG ceph_vg1 finished
Nov 24 13:59:37 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15055 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:38 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: damage ls {prefix=damage ls} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15057 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump loads {prefix=dump loads} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 24 13:59:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224797317' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:38 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:38.934+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:59:38 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:59:38 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 24 13:59:38 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 24 13:59:38 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715020211' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 24 13:59:39 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820292158' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 24 13:59:39 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: ops {prefix=ops} (starting...)
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861406773' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229367238' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 13:59:39 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278316039' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 13:59:39 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: session ls {prefix=session ls} (starting...)
Nov 24 13:59:40 np0005533938 ceph-mds[101380]: mds.cephfs.compute-0.apnhwb asok_command: status {prefix=status} (starting...)
Nov 24 13:59:40 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:40 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241713775' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 13:59:40 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15079 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005516369' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 24 13:59:40 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172093607' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521262031' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988803983' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890587240' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 24 13:59:41 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15091 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:41 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 13:59:41 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:41.644+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 13:59:41 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2341368656' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691508376' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:42 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15097 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3315380950' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15101 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15105 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 24 13:59:42 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075409262' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 24 13:59:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66142208 unmapped: 909312 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 901120 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 892928 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 884736 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 876544 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 868352 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 860160 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 851968 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 843776 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 835584 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 827392 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 819200 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 811008 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 802816 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 794624 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 786432 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 778240 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66281472 unmapped: 770048 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 761856 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 753664 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 745472 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 737280 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 729088 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 720896 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 712704 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 704512 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 696320 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 688128 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 679936 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 671744 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 655360 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 647168 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 638976 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 630784 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 622592 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 606208 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 598016 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 524288 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 491520 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 483328 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 475136 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 466944 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 458752 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 450560 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 442368 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 434176 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 425984 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 417792 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 409600 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 401408 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 393216 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 385024 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 376832 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 368640 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 360448 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 352256 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 344064 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 335872 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 327680 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 319488 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 311296 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 303104 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 294912 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 286720 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 278528 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 270336 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 262144 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 253952 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 245760 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15107 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 237568 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 229376 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 221184 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 212992 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 204800 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 196608 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 188416 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 180224 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 172032 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 163840 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 155648 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 147456 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 131072 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 122880 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5482 writes, 23K keys, 5482 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s
Interval WAL: 5482 writes, 769 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 57344 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 49152 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 40960 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 32768 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 24576 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 16384 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 8192 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 0 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 328.941497803s of 328.962036133s, submitted: 6
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1089536 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1081344 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 1064960 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 1048576 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 24 13:59:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836565426' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 950272 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5662 writes, 23K keys, 5662 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5662 writes, 859 syncs, 6.59 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.036       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55685d92add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.769165039s of 600.112915039s, submitted: 90
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 1744896 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xab97b/0x163000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826256 data_alloc: 218103808 data_used: 212992
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 120 handle_osd_map epochs [121,122], i have 120, src has [1,122]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 375.062835693s of 375.391784668s, submitted: 90
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 9699328 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcab4000/0x0/0x4ffc00000, data 0xaf0c9/0x169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 16957440 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 123 ms_handle_reset con 0x556861424400 session 0x556861a3e000
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 16941056 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab4000/0x0/0x4ffc00000, data 0x10af0c9/0x1169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949338 data_alloc: 218103808 data_used: 221184
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 123 heartbeat osd_stat(store_statfs(0x4fbab0000/0x0/0x4ffc00000, data 0x10b0c85/0x116d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 16809984 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 124 ms_handle_reset con 0x55685fa43c00 session 0x556861a3e1e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 16613376 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbaaa000/0x0/0x4ffc00000, data 0x10b2851/0x1172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959866 data_alloc: 218103808 data_used: 221184
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 16596992 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960026 data_alloc: 218103808 data_used: 225280
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42b4/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 16572416 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.260654449s of 35.765483856s, submitted: 60
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 126 ms_handle_reset con 0x55685ecad000 session 0x556861bab0e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa8000/0x0/0x4ffc00000, data 0x10b42d7/0x1176000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbaa4000/0x0/0x4ffc00000, data 0x10b5e54/0x1179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70451200 unmapped: 16531456 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964309 data_alloc: 218103808 data_used: 233472
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 16498688 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 127 ms_handle_reset con 0x55685fa43c00 session 0x556861bab680
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 16416768 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 15368192 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 128 ms_handle_reset con 0x55685fee0c00 session 0x55685ff2cf00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971712 data_alloc: 218103808 data_used: 249856
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 128 heartbeat osd_stat(store_statfs(0x4fba9e000/0x0/0x4ffc00000, data 0x10b999b/0x117f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 15376384 heap: 86982656 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80322560 unmapped: 15056896 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 22200320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.523596764s of 10.044019699s, submitted: 88
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 129 ms_handle_reset con 0x556861424400 session 0x556861a3fc20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 21004288 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685fee1000 session 0x55685f1890e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 130 ms_handle_reset con 0x55685ecad400 session 0x55685ff31e00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1c00 session 0x556861b321e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685ecad400 session 0x556861b42960
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f8a8c000/0x0/0x4ffc00000, data 0x40bfbe2/0x4190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 20930560 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334958 data_alloc: 218103808 data_used: 266240
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x55685fee1000 session 0x556861b33860
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 ms_handle_reset con 0x556861424400 session 0x556861b42780
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 19914752 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fee0c00 session 0x55685f0854a0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 132 ms_handle_reset con 0x55685fa43c00 session 0x556861b42f00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f8a8a000/0x0/0x4ffc00000, data 0x40bfc15/0x4192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 19832832 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1000 session 0x556861b5f4a0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685ecad400 session 0x55685f048960
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fee1c00 session 0x556861b43860
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x556861424400 session 0x556861b5f680
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 ms_handle_reset con 0x55685fa43c00 session 0x55685f049c20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 18759680 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 134 ms_handle_reset con 0x55685ecad400 session 0x556861b5fa40
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba7d000/0x0/0x4ffc00000, data 0x10c5d75/0x119e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 18751488 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 135 ms_handle_reset con 0x55685fee1000 session 0x55685ff2cf00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 18718720 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 136 ms_handle_reset con 0x55685fee1c00 session 0x55685f0854a0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030796 data_alloc: 218103808 data_used: 266240
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 18628608 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 18595840 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x556861424800 session 0x556861b74f00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 137 ms_handle_reset con 0x5568610d1000 session 0x55685e9225a0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fba77000/0x0/0x4ffc00000, data 0x10cafaa/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 17547264 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963214874s of 11.130927086s, submitted: 311
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 138 ms_handle_reset con 0x55685ecad000 session 0x556861b8c780
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 17514496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10cda61/0x11a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042431 data_alloc: 218103808 data_used: 278528
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 140 ms_handle_reset con 0x55685ecad400 session 0x556861b8cf00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 17448960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa42800 session 0x556861b8dc20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 141 ms_handle_reset con 0x55685fa43c00 session 0x55686331a1e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 17309696 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fba73000/0x0/0x4ffc00000, data 0x10d035e/0x11aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 142 ms_handle_reset con 0x55685ecad400 session 0x556861a32b40
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x5568610d1000 session 0x556861b321e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685ecad000 session 0x55686331a780
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fee1c00 session 0x55686331af00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 143 ms_handle_reset con 0x55685fa42800 session 0x556860d0c1e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 17203200 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17170432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17162240 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053039 data_alloc: 218103808 data_used: 290816
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10d590a/0x11b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.896071434s of 11.615738869s, submitted: 215
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17145856 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 17137664 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 147 ms_handle_reset con 0x55685fa42800 session 0x55686331b680
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063545 data_alloc: 218103808 data_used: 290816
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 17080320 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fba62000/0x0/0x4ffc00000, data 0x10dab66/0x11ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 17096704 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x556861a5e800 session 0x55686331bc20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 148 ms_handle_reset con 0x55685fee1000 session 0x556861a3e960
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 17014784 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x556861a5e400 session 0x556861a32b40
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070073 data_alloc: 218103808 data_used: 307200
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fba5c000/0x0/0x4ffc00000, data 0x10de2b4/0x11c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.863252640s of 10.083137512s, submitted: 82
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 149 ms_handle_reset con 0x55685fc06c00 session 0x556863375c20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 150 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 16809984 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 16801792 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 151 ms_handle_reset con 0x55685fee1000 session 0x5568633743c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fba56000/0x0/0x4ffc00000, data 0x10e1a95/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e400 session 0x55686331a000
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fba51000/0x0/0x4ffc00000, data 0x10e366d/0x11cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082688 data_alloc: 218103808 data_used: 307200
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16793600 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 152 ms_handle_reset con 0x556861a5e800 session 0x5568633743c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1400 session 0x5568633a6000
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55686331b680
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x55686331ad20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1c00 session 0x556861a3e960
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fee1000 session 0x556860d0c1e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861a5fc00 session 0x5568632121e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba4e000/0x0/0x4ffc00000, data 0x10e512b/0x11cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086691 data_alloc: 218103808 data_used: 315392
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 16744448 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.245035172s of 10.595973969s, submitted: 74
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x55685fa42800 session 0x5568632125a0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 ms_handle_reset con 0x556861ae1000 session 0x55685f04a960
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 16441344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fba2b000/0x0/0x4ffc00000, data 0x110914a/0x11f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e800 session 0x55685f250780
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5d000 session 0x5568632130e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x556861a5e400 session 0x556861b752c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 154 ms_handle_reset con 0x55685fa42800 session 0x556861b42d20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094730 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 155 ms_handle_reset con 0x556861a5d000 session 0x556861528000
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5e800 session 0x556861bab860
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861ae1000 session 0x556861b8c3c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861b8c1e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x556861a5d400 session 0x556861ab4f00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 16400384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x110e473/0x11fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 ms_handle_reset con 0x55685fa42800 session 0x556861ab4d20
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 16515072 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100812 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x556861a5d000 session 0x5568615290e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 16498688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1000 session 0x556863212f00
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.930842400s of 11.172493935s, submitted: 53
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 157 ms_handle_reset con 0x55685fee1c00 session 0x5568633a61e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fba1e000/0x0/0x4ffc00000, data 0x111001e/0x11ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,1])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 16490496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 158 ms_handle_reset con 0x55685fa42800 session 0x5568632132c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 16474112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106820 data_alloc: 218103808 data_used: 327680
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x10ef625/0x11df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad000 session 0x55686331ba40
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685ecad400 session 0x556861bab0e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1000 session 0x5568633a6780
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 16457728 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 ms_handle_reset con 0x55685fee1c00 session 0x5568633a6b40
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 160 ms_handle_reset con 0x55685ecad000 session 0x5568633a72c0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108734 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x10f122e/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.445519447s of 13.752939224s, submitted: 101
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 7658 writes, 29K keys, 7658 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7658 writes, 1723 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1996 writes, 5287 keys, 1996 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s#012Interval WAL: 1996 writes, 864 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 16424960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: mgrc ms_handle_reset ms_handle_reset con 0x55685f53c000
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: mgrc handle_mgr_configure stats_period=5
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] _maybe_adjust
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 16318464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 16146432 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79536128 unmapped: 15843328 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 15564800 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 15515648 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79831040 unmapped: 15548416 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79904768 unmapped: 15474688 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 ms_handle_reset con 0x55685fee0400 session 0x55685fab10e0
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111532 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb628000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 15466496 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 110.551521301s of 110.563240051s, submitted: 58
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79929344 unmapped: 15450112 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 15425536 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79962112 unmapped: 15417344 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79970304 unmapped: 15409152 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 15400960 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79986688 unmapped: 15392768 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 15384576 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 15376384 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80019456 unmapped: 15360000 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 15351808 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 15343616 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 15335424 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 15327232 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 15319040 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 15302656 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 15294464 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 15286272 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 15310848 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: osd.2 161 heartbeat osd_stat(store_statfs(0x4fb629000/0x0/0x4ffc00000, data 0x10f2c91/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: bluestore.MempoolThread(0x55685da09b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110652 data_alloc: 218103808 data_used: 331776
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 15007744 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 14958592 heap: 95379456 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:43 np0005533938 ceph-osd[90655]: do_command 'log dump' '{prefix=log dump}'
Nov 24 13:59:43 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15111 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:43 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 24 13:59:43 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830832788' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 24 13:59:43 np0005533938 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 24 13:59:44 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15115 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:44 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798673722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 24 13:59:44 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057798993' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 24 13:59:44 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 24 13:59:44 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047101108' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 24 13:59:45 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:59:45 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 24 13:59:45 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701745305' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 24 13:59:45 np0005533938 ceph-mgr[75218]: log_channel(audit) log [DBG] : from='client.15135 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 24 13:59:45 np0005533938 ceph-e5ee928f-099b-569b-93c9-ecf025cbb50d-mgr-compute-0-dfqptp[75214]: 2025-11-24T18:59:45.898+0000 7f6377bb5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:59:45 np0005533938 ceph-mgr[75218]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4073264559' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 24 13:59:46 np0005533938 ceph-mgr[75218]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201912732' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600982427' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809585107' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 24 13:59:46 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146791330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319776605' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46257218' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 180224 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 172032 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 163840 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 155648 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 147456 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 139264 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 131072 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 114688 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 106496 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 98304 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 90112 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 81920 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 73728 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 65536 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 57344 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 49152 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 40960 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 32768 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 24576 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 16384 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 8192 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 0 heap: 74473472 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1040384 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1032192 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1024000 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1015808 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1007616 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 999424 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 991232 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 983040 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 974848 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 966656 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 950272 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 942080 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 933888 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 925696 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 917504 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 909312 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 901120 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 892928 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 884736 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 876544 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 868352 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 860160 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 843776 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 835584 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 819200 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 811008 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 802816 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 794624 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 786432 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 778240 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 770048 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 761856 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 753664 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 745472 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 737280 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 729088 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 720896 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 712704 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 704512 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 696320 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 688128 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 679936 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 671744 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 663552 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 655360 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 647168 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 638976 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 622592 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 614400 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 606208 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 598016 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 589824 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 581632 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 573440 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 565248 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 557056 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 548864 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 540672 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 532480 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 524288 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 516096 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 507904 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 499712 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 491520 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 475136 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 466944 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 458752 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 450560 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 434176 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 425984 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 417792 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 409600 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 401408 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 393216 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 385024 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 601.0 total, 600.0 interval
Cumulative writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6505 writes, 27K keys, 6505 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
Interval WAL: 6505 writes, 1119 syncs, 5.81 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 319488 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 311296 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 303104 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 294912 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 286720 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 278528 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 270336 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 262144 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 253952 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 245760 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 237568 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 229376 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 221184 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 212992 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 204800 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75325440 unmapped: 196608 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 188416 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 180224 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 172032 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 163840 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 155648 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 147456 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 139264 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 131072 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 122880 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 114688 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75415552 unmapped: 106496 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 98304 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 90112 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 81920 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 73728 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 65536 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.267303467s of 310.278106689s, submitted: 2
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 57344 heap: 75522048 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 2023424 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75603968 unmapped: 2015232 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 2007040 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 1998848 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 1990656 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 1982464 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 1974272 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 1966080 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 1957888 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 1949696 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 1941504 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1933312 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 1925120 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 1908736 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 1900544 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 1892352 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 1884160 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 1875968 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 1867776 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 1859584 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 1843200 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 1835008 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1826816 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1818624 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 1810432 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1802240 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 1794048 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1785856 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 1777664 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1769472 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1761280 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1753088 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1744896 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b41ff1400 session 0x560b413a9860
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 ms_handle_reset con 0x560b42032000 session 0x560b41b383c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1736704 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 1728512 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 1720320 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 1712128 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1687552 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1679360 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 1671168 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 1662976 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 1654784 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1646592 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328258631' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 1630208 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 1622016 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 1613824 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 1605632 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 1597440 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 1589248 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1201.0 total, 600.0 interval
Cumulative writes: 6685 writes, 27K keys, 6685 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6685 writes, 1209 syncs, 5.53 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.36              0.00         1    0.365       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560b405cf1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1201.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 1556480 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1548288 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1540096 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1507328 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.852172852s of 600.152648926s, submitted: 90
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 1499136 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1490944 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 1482752 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 1474560 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 1466368 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 1458176 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 1449984 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 1441792 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 1433600 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 1425408 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848124 data_alloc: 218103808 data_used: 176128
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca50000/0x0/0x4ffc00000, data 0x11ae91/0x1ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 1417216 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 1409024 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 373.959564209s of 374.288970947s, submitted: 90
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 1351680 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 1277952 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860549 data_alloc: 218103808 data_used: 184320
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca47000/0x0/0x4ffc00000, data 0x11e601/0x1d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 123 ms_handle_reset con 0x560b43e19800 session 0x560b44f00d20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1261568 heap: 77619200 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x12019a/0x1d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17907712 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 124 ms_handle_reset con 0x560b42033000 session 0x560b44f1d0e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17924096 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978265 data_alloc: 218103808 data_used: 188416
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3f000/0x0/0x4ffc00000, data 0x1121d56/0x11dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980391 data_alloc: 218103808 data_used: 188416
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba3d000/0x0/0x4ffc00000, data 0x11237b9/0x11e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 17915904 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.974910736s of 36.823040009s, submitted: 49
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984850 data_alloc: 218103808 data_used: 188416
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 17891328 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 126 ms_handle_reset con 0x560b42033400 session 0x560b44f1dc20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba3c000/0x0/0x4ffc00000, data 0x1123fc9/0x11e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16834560 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1125b69/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 16818176 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 127 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f30f00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x1126f07/0x11e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15769600 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994861 data_alloc: 218103808 data_used: 196608
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 15753216 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 128 ms_handle_reset con 0x560b42033000 session 0x560b44f00d20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 15728640 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 15720448 heap: 94404608 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 15589376 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 128 heartbeat osd_stat(store_statfs(0x4faa31000/0x0/0x4ffc00000, data 0x2129478/0x21ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.644784927s of 10.040460587s, submitted: 85
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 129 ms_handle_reset con 0x560b43e19800 session 0x560b44e890e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223427 data_alloc: 218103808 data_used: 208896
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 23822336 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b42d4dc00 session 0x560b4295e3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 130 ms_handle_reset con 0x560b43cae000 session 0x560b44e892c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 23724032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43cb7000 session 0x560b44e890e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b41ff1c00 session 0x560b439f4780
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e19800 session 0x560b44f1dc20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 131 ms_handle_reset con 0x560b43e2b000 session 0x560b41f812c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 22773760 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8c000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 132 ms_handle_reset con 0x560b42033000 session 0x560b4221d4a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 22708224 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb21d000/0x0/0x4ffc00000, data 0x1131362/0x11fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b41ff1c00 session 0x560b439fc3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42d4dc00 session 0x560b44d8d680
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43cb7000 session 0x560b44f1c960
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b43e19800 session 0x560b44569c20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 ms_handle_reset con 0x560b42033000 session 0x560b4295e3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 22740992 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 134 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f8e3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041733 data_alloc: 218103808 data_used: 241664
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80068608 unmapped: 22732800 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 135 ms_handle_reset con 0x560b43cb7000 session 0x560b420e7c20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 22700032 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 136 ms_handle_reset con 0x560b43e19800 session 0x560b4221cb40
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fba12000/0x0/0x4ffc00000, data 0x1137add/0x1208000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 22618112 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 22609920 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43e2b000 session 0x560b44ce52c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 137 ms_handle_reset con 0x560b43ae1c00 session 0x560b44568960
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.287572861s of 10.136335373s, submitted: 244
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051198 data_alloc: 218103808 data_used: 249856
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 22577152 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 138 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d525a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fba0f000/0x0/0x4ffc00000, data 0x113b2dd/0x120d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 22519808 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 21413888 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 140 ms_handle_reset con 0x560b42033000 session 0x560b446f10e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 21372928 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fba08000/0x0/0x4ffc00000, data 0x113e1d0/0x1212000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43cb7000 session 0x560b44d8c3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 141 ms_handle_reset con 0x560b43e19800 session 0x560b4214bc20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 21291008 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 142 ms_handle_reset con 0x560b43ae1c00 session 0x560b4463ef00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063961 data_alloc: 218103808 data_used: 266240
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 21241856 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1141b0a/0x1217000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b42033000 session 0x560b44d8c000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31c20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43e21800 session 0x560b44f314a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 143 ms_handle_reset con 0x560b43cb7000 session 0x560b4463f4a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 21209088 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 21159936 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071603 data_alloc: 218103808 data_used: 274432
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb5f0000/0x0/0x4ffc00000, data 0x1144a42/0x121d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.284852028s of 11.915773392s, submitted: 186
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b41ff1c00 session 0x560b44d8c5a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b42033000 session 0x560b44ce4960
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 21143552 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4780
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074577 data_alloc: 218103808 data_used: 274432
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 21135360 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb5ed000/0x0/0x4ffc00000, data 0x114653d/0x1220000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 21110784 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 147 ms_handle_reset con 0x560b43e21800 session 0x560b421bcf00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 21094400 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 21086208 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5e6000/0x0/0x4ffc00000, data 0x1149c8b/0x1226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082986 data_alloc: 218103808 data_used: 278528
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 21069824 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88800 session 0x560b44f1d0e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 148 ms_handle_reset con 0x560b42e88400 session 0x560b4463fa40
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 21053440 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 24 13:59:47 np0005533938 ceph-mon[74927]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250234144' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870351791s of 10.022338867s, submitted: 58
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b41ff1c00 session 0x560b4463fe00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 21028864 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b42033000 session 0x560b44df10e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef4000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 21061632 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb5e0000/0x0/0x4ffc00000, data 0x114d3f7/0x122d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 21045248 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 151 ms_handle_reset con 0x560b43e21800 session 0x560b44ef5860
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095676 data_alloc: 218103808 data_used: 286720
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 19996672 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb5dd000/0x0/0x4ffc00000, data 0x1150b87/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f1ed20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 19980288 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 294912
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42033000 session 0x560b44df0b40
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b42e88400 session 0x560b4295e3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8cd20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 ms_handle_reset con 0x560b406a1c00 session 0x560b44d8cf00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42033000 session 0x560b41f834a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b44f31e00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 19939328 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44df10e0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b43ae1c00 session 0x560b44ef5860
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42d4dc00 session 0x560b4545c000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b41ff1c00 session 0x560b4545c780
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x1152b3c/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 20160512 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.832967758s of 11.620203972s, submitted: 124
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fb5d5000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107738 data_alloc: 218103808 data_used: 299008
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 19972096 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 ms_handle_reset con 0x560b42e88400 session 0x560b44f1c3c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43ae1c00 session 0x560b447c4f00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 19963904 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43cba800 session 0x560b44755e00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b421bcf00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 ms_handle_reset con 0x560b43b03400 session 0x560b44f30f00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fb5d6000/0x0/0x4ffc00000, data 0x11545d7/0x1238000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 155 ms_handle_reset con 0x560b41ff1c00 session 0x560b44df03c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 19931136 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b42803c20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 19898368 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1c00 session 0x560b44d8de00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43cba800 session 0x560b44d8c780
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b41ff1c00 session 0x560b42cebe00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5ca000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120481 data_alloc: 218103808 data_used: 315392
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 19906560 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 ms_handle_reset con 0x560b43ae1400 session 0x560b44ef4960
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 19881984 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb5cc000/0x0/0x4ffc00000, data 0x11598ce/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 18833408 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b43ae1c00 session 0x560b44f1c000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 19865600 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.018726349s of 10.783482552s, submitted: 110
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 157 ms_handle_reset con 0x560b42033000 session 0x560b4545d2c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 19849216 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125112 data_alloc: 218103808 data_used: 323584
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 19841024 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 158 ms_handle_reset con 0x560b43b03400 session 0x560b4545de00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 19832832 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c7000/0x0/0x4ffc00000, data 0x115d098/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b428f6000 session 0x560b44d8d2c0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43cafc00 session 0x560b4221d4a0
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 19816448 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b41ff1c00 session 0x560b41f82d20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128896 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 heartbeat osd_stat(store_statfs(0x4fb5c4000/0x0/0x4ffc00000, data 0x115eb17/0x124a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b42033000 session 0x560b445cc960
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 ms_handle_reset con 0x560b43ae1400 session 0x560b4281fc20
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 19791872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5c2000/0x0/0x4ffc00000, data 0x11606ed/0x124b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131316 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 19775488 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.780132294s of 12.234910965s, submitted: 104
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.0 total, 600.0 interval#012Cumulative writes: 8591 writes, 32K keys, 8591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8591 writes, 2012 syncs, 4.27 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1906 writes, 4972 keys, 1906 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s#012Interval WAL: 1906 writes, 803 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 19759104 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: mgrc ms_handle_reset ms_handle_reset con 0x560b41415c00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/536471675
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/536471675,v1:192.168.122.100:6801/536471675]
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: mgrc handle_mgr_configure stats_period=5
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff0800 session 0x560b413a9680
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b41ff1800 session 0x560b44d52f00
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4cc00 session 0x560b4278c000
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 19554304 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 19570688 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 19578880 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 19456000 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'config show' '{prefix=config show}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 ms_handle_reset con 0x560b42d4c400 session 0x560b43a1b860
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 18923520 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 18767872 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18759680 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf dump' '{prefix=perf dump}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf schema' '{prefix=perf schema}'
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134290 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5bf000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 18538496 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 112.251235962s of 112.263244629s, submitted: 53
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 18522112 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 18415616 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,1])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: osd.1 161 heartbeat osd_stat(store_statfs(0x4fb5c0000/0x0/0x4ffc00000, data 0x1162150/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: bluestore.MempoolThread(0x560b406adb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1133410 data_alloc: 218103808 data_used: 335872
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
Nov 24 13:59:47 np0005533938 ceph-osd[89581]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 18391040 heap: 102801408 old mem: 2845415832 new mem: 2845415832
